00:00:00.002 Started by upstream project "autotest-per-patch" build number 130487 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.056 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.057 The recommended git tool is: git 00:00:00.057 using credential 00000000-0000-0000-0000-000000000002 00:00:00.059 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.087 Fetching changes from the remote Git repository 00:00:00.090 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.150 Using shallow fetch with depth 1 00:00:00.150 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.150 > git --version # timeout=10 00:00:00.223 > git --version # 'git version 2.39.2' 00:00:00.223 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.260 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.260 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.694 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.707 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.722 Checking out Revision 7510e71a2b3ec6fca98e4ec196065590f900d444 (FETCH_HEAD) 00:00:03.722 > git config core.sparsecheckout # timeout=10 00:00:03.738 > git read-tree -mu HEAD # timeout=10 00:00:03.756 > git checkout -f 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=5 00:00:03.778 Commit message: "kid: add issue 3541" 00:00:03.778 > git rev-list --no-walk 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=10 00:00:03.883 [Pipeline] Start of Pipeline 00:00:03.899 [Pipeline] library 00:00:03.901 Loading library shm_lib@master 00:00:08.893 Library shm_lib@master is cached. Copying from home. 00:00:08.945 [Pipeline] node 00:00:23.980 Still waiting to schedule task 00:00:23.980 Waiting for next available executor on ‘vagrant-vm-host’ 00:05:32.243 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:05:32.245 [Pipeline] { 00:05:32.257 [Pipeline] catchError 00:05:32.259 [Pipeline] { 00:05:32.275 [Pipeline] wrap 00:05:32.285 [Pipeline] { 00:05:32.294 [Pipeline] stage 00:05:32.296 [Pipeline] { (Prologue) 00:05:32.316 [Pipeline] echo 00:05:32.318 Node: VM-host-SM9 00:05:32.325 [Pipeline] cleanWs 00:05:32.335 [WS-CLEANUP] Deleting project workspace... 00:05:32.335 [WS-CLEANUP] Deferred wipeout is used... 
00:05:32.341 [WS-CLEANUP] done 00:05:32.557 [Pipeline] setCustomBuildProperty 00:05:32.670 [Pipeline] httpRequest 00:05:33.071 [Pipeline] echo 00:05:33.074 Sorcerer 10.211.164.101 is alive 00:05:33.086 [Pipeline] retry 00:05:33.089 [Pipeline] { 00:05:33.105 [Pipeline] httpRequest 00:05:33.109 HttpMethod: GET 00:05:33.110 URL: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:05:33.110 Sending request to url: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:05:33.112 Response Code: HTTP/1.1 200 OK 00:05:33.112 Success: Status code 200 is in the accepted range: 200,404 00:05:33.113 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:05:33.257 [Pipeline] } 00:05:33.276 [Pipeline] // retry 00:05:33.285 [Pipeline] sh 00:05:33.566 + tar --no-same-owner -xf jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:05:33.580 [Pipeline] httpRequest 00:05:34.010 [Pipeline] echo 00:05:34.012 Sorcerer 10.211.164.101 is alive 00:05:34.022 [Pipeline] retry 00:05:34.024 [Pipeline] { 00:05:34.039 [Pipeline] httpRequest 00:05:34.043 HttpMethod: GET 00:05:34.044 URL: http://10.211.164.101/packages/spdk_71dc0c1e9d880d7a3a970b961426bccd54c6c096.tar.gz 00:05:34.045 Sending request to url: http://10.211.164.101/packages/spdk_71dc0c1e9d880d7a3a970b961426bccd54c6c096.tar.gz 00:05:34.046 Response Code: HTTP/1.1 200 OK 00:05:34.046 Success: Status code 200 is in the accepted range: 200,404 00:05:34.047 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_71dc0c1e9d880d7a3a970b961426bccd54c6c096.tar.gz 00:05:36.412 [Pipeline] } 00:05:36.429 [Pipeline] // retry 00:05:36.437 [Pipeline] sh 00:05:36.713 + tar --no-same-owner -xf spdk_71dc0c1e9d880d7a3a970b961426bccd54c6c096.tar.gz 00:05:40.004 [Pipeline] sh 00:05:40.282 + git -C spdk log --oneline -n5 00:05:40.283 71dc0c1e9 test/nvmf: Solve ambiguity around $NVMF_SECOND_TARGET_IP 00:05:40.283 5495ea97a test/nvmf: Don't pin nvmf_bdevperf and nvmf_target_disconnect to phy 00:05:40.283 41a395c47 test/nvmf: Remove all transport conditions from the test suites 00:05:40.283 0d645b00a test/nvmf: Drop $RDMA_IP_LIST 00:05:40.283 f09fa45e8 test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP 00:05:40.300 [Pipeline] writeFile 00:05:40.314 [Pipeline] sh 00:05:40.594 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:05:40.606 [Pipeline] sh 00:05:40.884 + cat autorun-spdk.conf 00:05:40.885 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:40.885 SPDK_TEST_NVMF=1 00:05:40.885 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:40.885 SPDK_TEST_URING=1 00:05:40.885 SPDK_TEST_USDT=1 00:05:40.885 SPDK_RUN_UBSAN=1 00:05:40.885 NET_TYPE=virt 00:05:40.885 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:40.891 RUN_NIGHTLY=0 00:05:40.893 [Pipeline] } 00:05:40.907 [Pipeline] // stage 00:05:40.923 [Pipeline] stage 00:05:40.925 [Pipeline] { (Run VM) 00:05:40.940 [Pipeline] sh 00:05:41.222 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:05:41.222 + echo 'Start stage prepare_nvme.sh' 00:05:41.222 Start stage prepare_nvme.sh 00:05:41.222 + [[ -n 1 ]] 00:05:41.222 + disk_prefix=ex1 00:05:41.222 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:05:41.222 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:05:41.222 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:05:41.222 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:41.222 ++ SPDK_TEST_NVMF=1 00:05:41.222 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:41.223 ++ SPDK_TEST_URING=1 00:05:41.223 ++ SPDK_TEST_USDT=1 00:05:41.223 ++ SPDK_RUN_UBSAN=1 00:05:41.223 ++ NET_TYPE=virt 00:05:41.223 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:41.223 ++ RUN_NIGHTLY=0 00:05:41.223 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:05:41.223 + nvme_files=() 00:05:41.223 + declare -A nvme_files 00:05:41.223 + backend_dir=/var/lib/libvirt/images/backends 00:05:41.223 + nvme_files['nvme.img']=5G 00:05:41.223 + nvme_files['nvme-cmb.img']=5G 00:05:41.223 + nvme_files['nvme-multi0.img']=4G 00:05:41.223 + nvme_files['nvme-multi1.img']=4G 00:05:41.223 + nvme_files['nvme-multi2.img']=4G 00:05:41.223 + nvme_files['nvme-openstack.img']=8G 00:05:41.223 + nvme_files['nvme-zns.img']=5G 00:05:41.223 + (( SPDK_TEST_NVME_PMR == 1 )) 00:05:41.223 + (( SPDK_TEST_FTL == 1 )) 00:05:41.223 + (( SPDK_TEST_NVME_FDP == 1 )) 00:05:41.223 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:05:41.223 + for nvme in "${!nvme_files[@]}" 00:05:41.223 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:05:41.223 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:05:41.223 + for nvme in "${!nvme_files[@]}" 00:05:41.223 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:05:41.223 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:05:41.223 + for nvme in "${!nvme_files[@]}" 00:05:41.223 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:05:41.223 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:05:41.480 + for nvme in "${!nvme_files[@]}" 00:05:41.480 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:05:42.048 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:05:42.048 + for nvme in "${!nvme_files[@]}" 00:05:42.048 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:05:42.048 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:05:42.048 + for nvme in "${!nvme_files[@]}" 00:05:42.048 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:05:42.048 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:05:42.048 + for nvme in "${!nvme_files[@]}" 00:05:42.048 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:05:42.980 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:05:42.980 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:05:42.980 + echo 'End stage prepare_nvme.sh' 00:05:42.980 End stage prepare_nvme.sh 00:05:42.992 [Pipeline] sh 00:05:43.348 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:05:43.348 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b 
/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:05:43.348 00:05:43.348 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:05:43.348 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:05:43.348 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:05:43.348 HELP=0 00:05:43.348 DRY_RUN=0 00:05:43.348 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:05:43.348 NVME_DISKS_TYPE=nvme,nvme, 00:05:43.348 NVME_AUTO_CREATE=0 00:05:43.348 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:05:43.348 NVME_CMB=,, 00:05:43.348 NVME_PMR=,, 00:05:43.348 NVME_ZNS=,, 00:05:43.348 NVME_MS=,, 00:05:43.348 NVME_FDP=,, 00:05:43.348 SPDK_VAGRANT_DISTRO=fedora39 00:05:43.348 SPDK_VAGRANT_VMCPU=10 00:05:43.348 SPDK_VAGRANT_VMRAM=12288 00:05:43.348 SPDK_VAGRANT_PROVIDER=libvirt 00:05:43.348 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:05:43.348 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:05:43.348 SPDK_OPENSTACK_NETWORK=0 00:05:43.348 VAGRANT_PACKAGE_BOX=0 00:05:43.348 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:05:43.348 FORCE_DISTRO=true 00:05:43.348 VAGRANT_BOX_VERSION= 00:05:43.348 EXTRA_VAGRANTFILES= 00:05:43.348 NIC_MODEL=e1000 00:05:43.348 00:05:43.348 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:05:43.348 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:05:46.629 Bringing machine 'default' up with 'libvirt' provider... 00:05:47.564 ==> default: Creating image (snapshot of base box volume). 00:05:47.564 ==> default: Creating domain with the following settings... 
00:05:47.564 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727442648_3d233f11146a07fe9688 00:05:47.564 ==> default: -- Domain type: kvm 00:05:47.564 ==> default: -- Cpus: 10 00:05:47.564 ==> default: -- Feature: acpi 00:05:47.564 ==> default: -- Feature: apic 00:05:47.564 ==> default: -- Feature: pae 00:05:47.564 ==> default: -- Memory: 12288M 00:05:47.564 ==> default: -- Memory Backing: hugepages: 00:05:47.564 ==> default: -- Management MAC: 00:05:47.564 ==> default: -- Loader: 00:05:47.564 ==> default: -- Nvram: 00:05:47.564 ==> default: -- Base box: spdk/fedora39 00:05:47.564 ==> default: -- Storage pool: default 00:05:47.564 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727442648_3d233f11146a07fe9688.img (20G) 00:05:47.564 ==> default: -- Volume Cache: default 00:05:47.564 ==> default: -- Kernel: 00:05:47.564 ==> default: -- Initrd: 00:05:47.564 ==> default: -- Graphics Type: vnc 00:05:47.564 ==> default: -- Graphics Port: -1 00:05:47.564 ==> default: -- Graphics IP: 127.0.0.1 00:05:47.564 ==> default: -- Graphics Password: Not defined 00:05:47.564 ==> default: -- Video Type: cirrus 00:05:47.564 ==> default: -- Video VRAM: 9216 00:05:47.564 ==> default: -- Sound Type: 00:05:47.564 ==> default: -- Keymap: en-us 00:05:47.564 ==> default: -- TPM Path: 00:05:47.564 ==> default: -- INPUT: type=mouse, bus=ps2 00:05:47.564 ==> default: -- Command line args: 00:05:47.564 ==> default: -> value=-device, 00:05:47.564 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:05:47.564 ==> default: -> value=-drive, 00:05:47.564 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:05:47.564 ==> default: -> value=-device, 00:05:47.564 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:47.564 ==> default: -> value=-device, 00:05:47.564 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:05:47.564 ==> default: -> value=-drive, 00:05:47.564 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:05:47.564 ==> default: -> value=-device, 00:05:47.564 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:47.564 ==> default: -> value=-drive, 00:05:47.564 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:05:47.564 ==> default: -> value=-device, 00:05:47.564 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:47.564 ==> default: -> value=-drive, 00:05:47.564 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:05:47.564 ==> default: -> value=-device, 00:05:47.564 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:47.565 ==> default: Creating shared folders metadata... 00:05:47.565 ==> default: Starting domain. 00:05:48.943 ==> default: Waiting for domain to get an IP address... 00:06:07.027 ==> default: Waiting for SSH to become available... 00:06:07.027 ==> default: Configuring and enabling network interfaces... 
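The -device/-drive pairs in the domain settings above describe two emulated NVMe controllers: nvme-0 with one namespace backed by ex1-nvme.img, and nvme-1 with three namespaces backed by the ex1-nvme-multi0/1/2 images. Outside of vagrant-libvirt, roughly the same NVMe topology could be attached to a hand-started QEMU guest as in the sketch below; the NVMe arguments follow the log (minus the explicit PCI addresses and the zoned=false defaults), while the boot disk, memory and CPU values are placeholders, not taken from the log.

  # Sketch: NVMe layout equivalent to the logged libvirt domain, on plain QEMU.
  # guest.qcow2 and the -m/-smp values are assumptions, not taken from the log.
  qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
    -drive file=guest.qcow2,if=virtio \
    -device nvme,id=nvme-0,serial=12340 \
    -drive format=raw,file=ex1-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096 \
    -device nvme,id=nvme-1,serial=12341 \
    -drive format=raw,file=ex1-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=ex1-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=ex1-nvme-multi2.img,if=none,id=nvme-1-drive2 \
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,logical_block_size=4096,physical_block_size=4096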
00:06:09.571 default: SSH address: 192.168.121.76:22 00:06:09.571 default: SSH username: vagrant 00:06:09.571 default: SSH auth method: private key 00:06:11.472 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:06:19.650 ==> default: Mounting SSHFS shared folder... 00:06:20.586 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:06:20.586 ==> default: Checking Mount.. 00:06:21.521 ==> default: Folder Successfully Mounted! 00:06:21.521 ==> default: Running provisioner: file... 00:06:22.457 default: ~/.gitconfig => .gitconfig 00:06:22.716 00:06:22.716 SUCCESS! 00:06:22.716 00:06:22.716 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:06:22.716 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:06:22.716 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:06:22.716 00:06:22.725 [Pipeline] } 00:06:22.738 [Pipeline] // stage 00:06:22.749 [Pipeline] dir 00:06:22.749 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:06:22.751 [Pipeline] { 00:06:22.763 [Pipeline] catchError 00:06:22.764 [Pipeline] { 00:06:22.776 [Pipeline] sh 00:06:23.057 + vagrant ssh-config --host vagrant 00:06:23.057 + sed -ne /^Host/,$p 00:06:23.057 + tee ssh_conf 00:06:26.370 Host vagrant 00:06:26.370 HostName 192.168.121.76 00:06:26.370 User vagrant 00:06:26.370 Port 22 00:06:26.370 UserKnownHostsFile /dev/null 00:06:26.370 StrictHostKeyChecking no 00:06:26.370 PasswordAuthentication no 00:06:26.370 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:06:26.370 IdentitiesOnly yes 00:06:26.371 LogLevel FATAL 00:06:26.371 ForwardAgent yes 00:06:26.371 ForwardX11 yes 00:06:26.371 00:06:26.385 [Pipeline] withEnv 00:06:26.387 [Pipeline] { 00:06:26.400 [Pipeline] sh 00:06:26.680 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:06:26.680 source /etc/os-release 00:06:26.680 [[ -e /image.version ]] && img=$(< /image.version) 00:06:26.680 # Minimal, systemd-like check. 00:06:26.680 if [[ -e /.dockerenv ]]; then 00:06:26.680 # Clear garbage from the node's name: 00:06:26.680 # agt-er_autotest_547-896 -> autotest_547-896 00:06:26.680 # $HOSTNAME is the actual container id 00:06:26.680 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:06:26.680 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:06:26.680 # We can assume this is a mount from a host where container is running, 00:06:26.680 # so fetch its hostname to easily identify the target swarm worker. 
00:06:26.680 container="$(< /etc/hostname) ($agent)" 00:06:26.680 else 00:06:26.680 # Fallback 00:06:26.680 container=$agent 00:06:26.680 fi 00:06:26.680 fi 00:06:26.680 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:06:26.680 00:06:26.692 [Pipeline] } 00:06:26.708 [Pipeline] // withEnv 00:06:26.717 [Pipeline] setCustomBuildProperty 00:06:26.732 [Pipeline] stage 00:06:26.734 [Pipeline] { (Tests) 00:06:26.751 [Pipeline] sh 00:06:27.031 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:06:27.304 [Pipeline] sh 00:06:27.584 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:06:27.600 [Pipeline] timeout 00:06:27.600 Timeout set to expire in 1 hr 0 min 00:06:27.602 [Pipeline] { 00:06:27.618 [Pipeline] sh 00:06:27.897 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:06:28.464 HEAD is now at 71dc0c1e9 test/nvmf: Solve ambiguity around $NVMF_SECOND_TARGET_IP 00:06:28.476 [Pipeline] sh 00:06:28.756 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:06:29.028 [Pipeline] sh 00:06:29.309 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:06:29.583 [Pipeline] sh 00:06:29.860 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:06:30.118 ++ readlink -f spdk_repo 00:06:30.118 + DIR_ROOT=/home/vagrant/spdk_repo 00:06:30.118 + [[ -n /home/vagrant/spdk_repo ]] 00:06:30.118 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:06:30.118 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:06:30.118 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:06:30.118 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:06:30.118 + [[ -d /home/vagrant/spdk_repo/output ]] 00:06:30.118 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:06:30.118 + cd /home/vagrant/spdk_repo 00:06:30.118 + source /etc/os-release 00:06:30.118 ++ NAME='Fedora Linux' 00:06:30.118 ++ VERSION='39 (Cloud Edition)' 00:06:30.118 ++ ID=fedora 00:06:30.118 ++ VERSION_ID=39 00:06:30.118 ++ VERSION_CODENAME= 00:06:30.118 ++ PLATFORM_ID=platform:f39 00:06:30.118 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:06:30.118 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:30.118 ++ LOGO=fedora-logo-icon 00:06:30.118 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:06:30.118 ++ HOME_URL=https://fedoraproject.org/ 00:06:30.118 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:06:30.118 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:30.118 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:30.118 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:30.118 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:06:30.118 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:30.118 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:06:30.118 ++ SUPPORT_END=2024-11-12 00:06:30.118 ++ VARIANT='Cloud Edition' 00:06:30.118 ++ VARIANT_ID=cloud 00:06:30.118 + uname -a 00:06:30.118 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:06:30.118 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:30.375 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:30.375 Hugepages 00:06:30.375 node hugesize free / total 00:06:30.375 node0 1048576kB 0 / 0 00:06:30.375 node0 2048kB 0 / 0 00:06:30.375 00:06:30.375 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:30.375 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:30.375 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:30.633 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:30.633 + rm -f /tmp/spdk-ld-path 00:06:30.633 + source autorun-spdk.conf 00:06:30.633 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:30.633 ++ SPDK_TEST_NVMF=1 00:06:30.633 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:30.633 ++ SPDK_TEST_URING=1 00:06:30.633 ++ SPDK_TEST_USDT=1 00:06:30.633 ++ SPDK_RUN_UBSAN=1 00:06:30.633 ++ NET_TYPE=virt 00:06:30.633 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:30.633 ++ RUN_NIGHTLY=0 00:06:30.633 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:30.633 + [[ -n '' ]] 00:06:30.633 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:06:30.633 + for M in /var/spdk/build-*-manifest.txt 00:06:30.633 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:06:30.633 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:30.633 + for M in /var/spdk/build-*-manifest.txt 00:06:30.633 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:30.633 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:30.633 + for M in /var/spdk/build-*-manifest.txt 00:06:30.633 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:30.633 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:30.633 ++ uname 00:06:30.633 + [[ Linux == \L\i\n\u\x ]] 00:06:30.633 + sudo dmesg -T 00:06:30.633 + sudo dmesg --clear 00:06:30.633 + dmesg_pid=5258 00:06:30.633 + sudo dmesg -Tw 00:06:30.633 + [[ Fedora Linux == FreeBSD ]] 00:06:30.633 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:30.633 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:30.633 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:30.633 + [[ -x /usr/src/fio-static/fio ]] 00:06:30.633 + export FIO_BIN=/usr/src/fio-static/fio 00:06:30.633 + FIO_BIN=/usr/src/fio-static/fio 00:06:30.633 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:30.633 + [[ ! -v VFIO_QEMU_BIN ]] 00:06:30.633 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:30.633 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:30.633 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:30.633 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:30.633 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:30.633 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:30.633 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:30.633 Test configuration: 00:06:30.633 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:30.633 SPDK_TEST_NVMF=1 00:06:30.633 SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:30.633 SPDK_TEST_URING=1 00:06:30.633 SPDK_TEST_USDT=1 00:06:30.633 SPDK_RUN_UBSAN=1 00:06:30.633 NET_TYPE=virt 00:06:30.633 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:30.633 RUN_NIGHTLY=0 13:11:32 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:06:30.633 13:11:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:30.633 13:11:32 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:30.633 13:11:32 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:30.633 13:11:32 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.633 13:11:32 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.633 13:11:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.633 13:11:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.633 13:11:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.633 13:11:32 -- paths/export.sh@5 -- $ export PATH 00:06:30.633 13:11:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.633 13:11:32 -- common/autobuild_common.sh@478 -- $ 
out=/home/vagrant/spdk_repo/spdk/../output 00:06:30.633 13:11:32 -- common/autobuild_common.sh@479 -- $ date +%s 00:06:30.633 13:11:32 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727442692.XXXXXX 00:06:30.633 13:11:32 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727442692.iTvQb3 00:06:30.633 13:11:32 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:06:30.633 13:11:32 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:06:30.633 13:11:32 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:06:30.633 13:11:32 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:06:30.633 13:11:32 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:06:30.633 13:11:32 -- common/autobuild_common.sh@495 -- $ get_config_params 00:06:30.633 13:11:32 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:06:30.633 13:11:32 -- common/autotest_common.sh@10 -- $ set +x 00:06:30.633 13:11:32 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:06:30.633 13:11:32 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:06:30.633 13:11:32 -- pm/common@17 -- $ local monitor 00:06:30.633 13:11:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:30.633 13:11:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:30.633 13:11:32 -- pm/common@25 -- $ sleep 1 00:06:30.891 13:11:32 -- pm/common@21 -- $ date +%s 00:06:30.891 13:11:32 -- pm/common@21 -- $ date +%s 00:06:30.891 13:11:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727442692 00:06:30.891 13:11:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727442692 00:06:30.891 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727442692_collect-cpu-load.pm.log 00:06:30.891 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727442692_collect-vmstat.pm.log 00:06:31.827 13:11:33 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:06:31.827 13:11:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:31.827 13:11:33 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:31.827 13:11:33 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:06:31.827 13:11:33 -- spdk/autobuild.sh@16 -- $ date -u 00:06:31.827 Fri Sep 27 01:11:33 PM UTC 2024 00:06:31.827 13:11:33 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:31.827 v25.01-pre-24-g71dc0c1e9 00:06:31.827 13:11:33 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:06:31.827 13:11:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:31.827 13:11:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:31.827 13:11:33 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:06:31.827 13:11:33 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:06:31.827 13:11:33 -- common/autotest_common.sh@10 -- $ set +x 00:06:31.827 
************************************ 00:06:31.827 START TEST ubsan 00:06:31.827 ************************************ 00:06:31.827 using ubsan 00:06:31.827 13:11:33 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:06:31.827 00:06:31.827 real 0m0.000s 00:06:31.827 user 0m0.000s 00:06:31.827 sys 0m0.000s 00:06:31.827 13:11:33 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:31.827 13:11:33 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:31.827 ************************************ 00:06:31.827 END TEST ubsan 00:06:31.827 ************************************ 00:06:31.827 13:11:33 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:31.827 13:11:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:31.827 13:11:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:31.827 13:11:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:31.827 13:11:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:31.827 13:11:33 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:31.827 13:11:33 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:31.827 13:11:33 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:31.827 13:11:33 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:06:31.827 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:31.827 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:32.395 Using 'verbs' RDMA provider 00:06:45.536 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:07:00.432 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:07:00.432 Creating mk/config.mk...done. 00:07:00.432 Creating mk/cc.flags.mk...done. 00:07:00.432 Type 'make' to build. 00:07:00.432 13:12:00 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:07:00.432 13:12:00 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:07:00.432 13:12:00 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:07:00.432 13:12:00 -- common/autotest_common.sh@10 -- $ set +x 00:07:00.432 ************************************ 00:07:00.432 START TEST make 00:07:00.432 ************************************ 00:07:00.432 13:12:00 make -- common/autotest_common.sh@1125 -- $ make -j10 00:07:00.432 make[1]: Nothing to be done for 'all'. 
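The /home/vagrant/spdk_repo/spdk/configure line above carries the options autobuild assembled for this job (get_config_params plus the UBSAN/uring/shared settings), and the following 'run_test make make -j10' drives the build. Rerunning the same build outside the CI wrapper is essentially the sketch below; the flags, the fio path and the -j10 parallelism are copied from the log and may need adjusting on another machine.

  # Sketch: reproduce this job's SPDK configuration and build by hand.
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
  make -j10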
00:07:12.644 The Meson build system 00:07:12.644 Version: 1.5.0 00:07:12.644 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:07:12.644 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:07:12.644 Build type: native build 00:07:12.644 Program cat found: YES (/usr/bin/cat) 00:07:12.644 Project name: DPDK 00:07:12.644 Project version: 24.03.0 00:07:12.644 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:12.644 C linker for the host machine: cc ld.bfd 2.40-14 00:07:12.644 Host machine cpu family: x86_64 00:07:12.644 Host machine cpu: x86_64 00:07:12.644 Message: ## Building in Developer Mode ## 00:07:12.644 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:12.644 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:07:12.644 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:12.644 Program python3 found: YES (/usr/bin/python3) 00:07:12.644 Program cat found: YES (/usr/bin/cat) 00:07:12.644 Compiler for C supports arguments -march=native: YES 00:07:12.644 Checking for size of "void *" : 8 00:07:12.644 Checking for size of "void *" : 8 (cached) 00:07:12.644 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:07:12.644 Library m found: YES 00:07:12.644 Library numa found: YES 00:07:12.644 Has header "numaif.h" : YES 00:07:12.644 Library fdt found: NO 00:07:12.644 Library execinfo found: NO 00:07:12.644 Has header "execinfo.h" : YES 00:07:12.644 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:12.644 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:12.644 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:12.644 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:12.644 Run-time dependency openssl found: YES 3.1.1 00:07:12.644 Run-time dependency libpcap found: YES 1.10.4 00:07:12.644 Has header "pcap.h" with dependency libpcap: YES 00:07:12.644 Compiler for C supports arguments -Wcast-qual: YES 00:07:12.644 Compiler for C supports arguments -Wdeprecated: YES 00:07:12.644 Compiler for C supports arguments -Wformat: YES 00:07:12.644 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:12.644 Compiler for C supports arguments -Wformat-security: NO 00:07:12.644 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:12.644 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:12.644 Compiler for C supports arguments -Wnested-externs: YES 00:07:12.644 Compiler for C supports arguments -Wold-style-definition: YES 00:07:12.644 Compiler for C supports arguments -Wpointer-arith: YES 00:07:12.644 Compiler for C supports arguments -Wsign-compare: YES 00:07:12.644 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:12.644 Compiler for C supports arguments -Wundef: YES 00:07:12.644 Compiler for C supports arguments -Wwrite-strings: YES 00:07:12.644 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:12.644 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:07:12.644 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:12.644 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:12.644 Program objdump found: YES (/usr/bin/objdump) 00:07:12.644 Compiler for C supports arguments -mavx512f: YES 00:07:12.644 Checking if "AVX512 checking" compiles: YES 00:07:12.644 Fetching value of define "__SSE4_2__" : 1 00:07:12.644 Fetching value of define 
"__AES__" : 1 00:07:12.644 Fetching value of define "__AVX__" : 1 00:07:12.644 Fetching value of define "__AVX2__" : 1 00:07:12.644 Fetching value of define "__AVX512BW__" : (undefined) 00:07:12.644 Fetching value of define "__AVX512CD__" : (undefined) 00:07:12.644 Fetching value of define "__AVX512DQ__" : (undefined) 00:07:12.644 Fetching value of define "__AVX512F__" : (undefined) 00:07:12.644 Fetching value of define "__AVX512VL__" : (undefined) 00:07:12.644 Fetching value of define "__PCLMUL__" : 1 00:07:12.644 Fetching value of define "__RDRND__" : 1 00:07:12.644 Fetching value of define "__RDSEED__" : 1 00:07:12.644 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:12.644 Fetching value of define "__znver1__" : (undefined) 00:07:12.644 Fetching value of define "__znver2__" : (undefined) 00:07:12.644 Fetching value of define "__znver3__" : (undefined) 00:07:12.644 Fetching value of define "__znver4__" : (undefined) 00:07:12.644 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:12.644 Message: lib/log: Defining dependency "log" 00:07:12.644 Message: lib/kvargs: Defining dependency "kvargs" 00:07:12.644 Message: lib/telemetry: Defining dependency "telemetry" 00:07:12.644 Checking for function "getentropy" : NO 00:07:12.644 Message: lib/eal: Defining dependency "eal" 00:07:12.644 Message: lib/ring: Defining dependency "ring" 00:07:12.644 Message: lib/rcu: Defining dependency "rcu" 00:07:12.644 Message: lib/mempool: Defining dependency "mempool" 00:07:12.644 Message: lib/mbuf: Defining dependency "mbuf" 00:07:12.644 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:12.644 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:07:12.644 Compiler for C supports arguments -mpclmul: YES 00:07:12.644 Compiler for C supports arguments -maes: YES 00:07:12.644 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:12.644 Compiler for C supports arguments -mavx512bw: YES 00:07:12.644 Compiler for C supports arguments -mavx512dq: YES 00:07:12.644 Compiler for C supports arguments -mavx512vl: YES 00:07:12.644 Compiler for C supports arguments -mvpclmulqdq: YES 00:07:12.644 Compiler for C supports arguments -mavx2: YES 00:07:12.644 Compiler for C supports arguments -mavx: YES 00:07:12.644 Message: lib/net: Defining dependency "net" 00:07:12.644 Message: lib/meter: Defining dependency "meter" 00:07:12.644 Message: lib/ethdev: Defining dependency "ethdev" 00:07:12.644 Message: lib/pci: Defining dependency "pci" 00:07:12.644 Message: lib/cmdline: Defining dependency "cmdline" 00:07:12.644 Message: lib/hash: Defining dependency "hash" 00:07:12.644 Message: lib/timer: Defining dependency "timer" 00:07:12.644 Message: lib/compressdev: Defining dependency "compressdev" 00:07:12.644 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:12.644 Message: lib/dmadev: Defining dependency "dmadev" 00:07:12.644 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:12.644 Message: lib/power: Defining dependency "power" 00:07:12.644 Message: lib/reorder: Defining dependency "reorder" 00:07:12.644 Message: lib/security: Defining dependency "security" 00:07:12.644 Has header "linux/userfaultfd.h" : YES 00:07:12.644 Has header "linux/vduse.h" : YES 00:07:12.644 Message: lib/vhost: Defining dependency "vhost" 00:07:12.644 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:07:12.645 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:12.645 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:12.645 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:12.645 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:07:12.645 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:07:12.645 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:07:12.645 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:07:12.645 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:07:12.645 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:07:12.645 Program doxygen found: YES (/usr/local/bin/doxygen) 00:07:12.645 Configuring doxy-api-html.conf using configuration 00:07:12.645 Configuring doxy-api-man.conf using configuration 00:07:12.645 Program mandb found: YES (/usr/bin/mandb) 00:07:12.645 Program sphinx-build found: NO 00:07:12.645 Configuring rte_build_config.h using configuration 00:07:12.645 Message: 00:07:12.645 ================= 00:07:12.645 Applications Enabled 00:07:12.645 ================= 00:07:12.645 00:07:12.645 apps: 00:07:12.645 00:07:12.645 00:07:12.645 Message: 00:07:12.645 ================= 00:07:12.645 Libraries Enabled 00:07:12.645 ================= 00:07:12.645 00:07:12.645 libs: 00:07:12.645 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:07:12.645 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:07:12.645 cryptodev, dmadev, power, reorder, security, vhost, 00:07:12.645 00:07:12.645 Message: 00:07:12.645 =============== 00:07:12.645 Drivers Enabled 00:07:12.645 =============== 00:07:12.645 00:07:12.645 common: 00:07:12.645 00:07:12.645 bus: 00:07:12.645 pci, vdev, 00:07:12.645 mempool: 00:07:12.645 ring, 00:07:12.645 dma: 00:07:12.645 00:07:12.645 net: 00:07:12.645 00:07:12.645 crypto: 00:07:12.645 00:07:12.645 compress: 00:07:12.645 00:07:12.645 vdpa: 00:07:12.645 00:07:12.645 00:07:12.645 Message: 00:07:12.645 ================= 00:07:12.645 Content Skipped 00:07:12.645 ================= 00:07:12.645 00:07:12.645 apps: 00:07:12.645 dumpcap: explicitly disabled via build config 00:07:12.645 graph: explicitly disabled via build config 00:07:12.645 pdump: explicitly disabled via build config 00:07:12.645 proc-info: explicitly disabled via build config 00:07:12.645 test-acl: explicitly disabled via build config 00:07:12.645 test-bbdev: explicitly disabled via build config 00:07:12.645 test-cmdline: explicitly disabled via build config 00:07:12.645 test-compress-perf: explicitly disabled via build config 00:07:12.645 test-crypto-perf: explicitly disabled via build config 00:07:12.645 test-dma-perf: explicitly disabled via build config 00:07:12.645 test-eventdev: explicitly disabled via build config 00:07:12.645 test-fib: explicitly disabled via build config 00:07:12.645 test-flow-perf: explicitly disabled via build config 00:07:12.645 test-gpudev: explicitly disabled via build config 00:07:12.645 test-mldev: explicitly disabled via build config 00:07:12.645 test-pipeline: explicitly disabled via build config 00:07:12.645 test-pmd: explicitly disabled via build config 00:07:12.645 test-regex: explicitly disabled via build config 00:07:12.645 test-sad: explicitly disabled via build config 00:07:12.645 test-security-perf: explicitly disabled via build config 00:07:12.645 00:07:12.645 libs: 00:07:12.645 argparse: explicitly disabled via build config 00:07:12.645 metrics: explicitly disabled via build config 00:07:12.645 acl: explicitly disabled via build config 00:07:12.645 bbdev: explicitly disabled via build config 
00:07:12.645 bitratestats: explicitly disabled via build config 00:07:12.645 bpf: explicitly disabled via build config 00:07:12.645 cfgfile: explicitly disabled via build config 00:07:12.645 distributor: explicitly disabled via build config 00:07:12.645 efd: explicitly disabled via build config 00:07:12.645 eventdev: explicitly disabled via build config 00:07:12.645 dispatcher: explicitly disabled via build config 00:07:12.645 gpudev: explicitly disabled via build config 00:07:12.645 gro: explicitly disabled via build config 00:07:12.645 gso: explicitly disabled via build config 00:07:12.645 ip_frag: explicitly disabled via build config 00:07:12.645 jobstats: explicitly disabled via build config 00:07:12.645 latencystats: explicitly disabled via build config 00:07:12.645 lpm: explicitly disabled via build config 00:07:12.645 member: explicitly disabled via build config 00:07:12.645 pcapng: explicitly disabled via build config 00:07:12.645 rawdev: explicitly disabled via build config 00:07:12.645 regexdev: explicitly disabled via build config 00:07:12.645 mldev: explicitly disabled via build config 00:07:12.645 rib: explicitly disabled via build config 00:07:12.645 sched: explicitly disabled via build config 00:07:12.645 stack: explicitly disabled via build config 00:07:12.645 ipsec: explicitly disabled via build config 00:07:12.645 pdcp: explicitly disabled via build config 00:07:12.645 fib: explicitly disabled via build config 00:07:12.645 port: explicitly disabled via build config 00:07:12.645 pdump: explicitly disabled via build config 00:07:12.645 table: explicitly disabled via build config 00:07:12.645 pipeline: explicitly disabled via build config 00:07:12.645 graph: explicitly disabled via build config 00:07:12.645 node: explicitly disabled via build config 00:07:12.645 00:07:12.645 drivers: 00:07:12.645 common/cpt: not in enabled drivers build config 00:07:12.645 common/dpaax: not in enabled drivers build config 00:07:12.645 common/iavf: not in enabled drivers build config 00:07:12.645 common/idpf: not in enabled drivers build config 00:07:12.645 common/ionic: not in enabled drivers build config 00:07:12.645 common/mvep: not in enabled drivers build config 00:07:12.645 common/octeontx: not in enabled drivers build config 00:07:12.645 bus/auxiliary: not in enabled drivers build config 00:07:12.645 bus/cdx: not in enabled drivers build config 00:07:12.645 bus/dpaa: not in enabled drivers build config 00:07:12.645 bus/fslmc: not in enabled drivers build config 00:07:12.645 bus/ifpga: not in enabled drivers build config 00:07:12.645 bus/platform: not in enabled drivers build config 00:07:12.645 bus/uacce: not in enabled drivers build config 00:07:12.645 bus/vmbus: not in enabled drivers build config 00:07:12.645 common/cnxk: not in enabled drivers build config 00:07:12.645 common/mlx5: not in enabled drivers build config 00:07:12.645 common/nfp: not in enabled drivers build config 00:07:12.645 common/nitrox: not in enabled drivers build config 00:07:12.645 common/qat: not in enabled drivers build config 00:07:12.645 common/sfc_efx: not in enabled drivers build config 00:07:12.645 mempool/bucket: not in enabled drivers build config 00:07:12.645 mempool/cnxk: not in enabled drivers build config 00:07:12.645 mempool/dpaa: not in enabled drivers build config 00:07:12.645 mempool/dpaa2: not in enabled drivers build config 00:07:12.645 mempool/octeontx: not in enabled drivers build config 00:07:12.645 mempool/stack: not in enabled drivers build config 00:07:12.645 dma/cnxk: not in enabled 
drivers build config 00:07:12.645 dma/dpaa: not in enabled drivers build config 00:07:12.645 dma/dpaa2: not in enabled drivers build config 00:07:12.645 dma/hisilicon: not in enabled drivers build config 00:07:12.645 dma/idxd: not in enabled drivers build config 00:07:12.645 dma/ioat: not in enabled drivers build config 00:07:12.645 dma/skeleton: not in enabled drivers build config 00:07:12.645 net/af_packet: not in enabled drivers build config 00:07:12.645 net/af_xdp: not in enabled drivers build config 00:07:12.645 net/ark: not in enabled drivers build config 00:07:12.645 net/atlantic: not in enabled drivers build config 00:07:12.645 net/avp: not in enabled drivers build config 00:07:12.645 net/axgbe: not in enabled drivers build config 00:07:12.645 net/bnx2x: not in enabled drivers build config 00:07:12.645 net/bnxt: not in enabled drivers build config 00:07:12.645 net/bonding: not in enabled drivers build config 00:07:12.645 net/cnxk: not in enabled drivers build config 00:07:12.645 net/cpfl: not in enabled drivers build config 00:07:12.645 net/cxgbe: not in enabled drivers build config 00:07:12.645 net/dpaa: not in enabled drivers build config 00:07:12.645 net/dpaa2: not in enabled drivers build config 00:07:12.645 net/e1000: not in enabled drivers build config 00:07:12.645 net/ena: not in enabled drivers build config 00:07:12.645 net/enetc: not in enabled drivers build config 00:07:12.645 net/enetfec: not in enabled drivers build config 00:07:12.645 net/enic: not in enabled drivers build config 00:07:12.645 net/failsafe: not in enabled drivers build config 00:07:12.645 net/fm10k: not in enabled drivers build config 00:07:12.645 net/gve: not in enabled drivers build config 00:07:12.646 net/hinic: not in enabled drivers build config 00:07:12.646 net/hns3: not in enabled drivers build config 00:07:12.646 net/i40e: not in enabled drivers build config 00:07:12.646 net/iavf: not in enabled drivers build config 00:07:12.646 net/ice: not in enabled drivers build config 00:07:12.646 net/idpf: not in enabled drivers build config 00:07:12.646 net/igc: not in enabled drivers build config 00:07:12.646 net/ionic: not in enabled drivers build config 00:07:12.646 net/ipn3ke: not in enabled drivers build config 00:07:12.646 net/ixgbe: not in enabled drivers build config 00:07:12.646 net/mana: not in enabled drivers build config 00:07:12.646 net/memif: not in enabled drivers build config 00:07:12.646 net/mlx4: not in enabled drivers build config 00:07:12.646 net/mlx5: not in enabled drivers build config 00:07:12.646 net/mvneta: not in enabled drivers build config 00:07:12.646 net/mvpp2: not in enabled drivers build config 00:07:12.646 net/netvsc: not in enabled drivers build config 00:07:12.646 net/nfb: not in enabled drivers build config 00:07:12.646 net/nfp: not in enabled drivers build config 00:07:12.646 net/ngbe: not in enabled drivers build config 00:07:12.646 net/null: not in enabled drivers build config 00:07:12.646 net/octeontx: not in enabled drivers build config 00:07:12.646 net/octeon_ep: not in enabled drivers build config 00:07:12.646 net/pcap: not in enabled drivers build config 00:07:12.646 net/pfe: not in enabled drivers build config 00:07:12.646 net/qede: not in enabled drivers build config 00:07:12.646 net/ring: not in enabled drivers build config 00:07:12.646 net/sfc: not in enabled drivers build config 00:07:12.646 net/softnic: not in enabled drivers build config 00:07:12.646 net/tap: not in enabled drivers build config 00:07:12.646 net/thunderx: not in enabled drivers build 
config 00:07:12.646 net/txgbe: not in enabled drivers build config 00:07:12.646 net/vdev_netvsc: not in enabled drivers build config 00:07:12.646 net/vhost: not in enabled drivers build config 00:07:12.646 net/virtio: not in enabled drivers build config 00:07:12.646 net/vmxnet3: not in enabled drivers build config 00:07:12.646 raw/*: missing internal dependency, "rawdev" 00:07:12.646 crypto/armv8: not in enabled drivers build config 00:07:12.646 crypto/bcmfs: not in enabled drivers build config 00:07:12.646 crypto/caam_jr: not in enabled drivers build config 00:07:12.646 crypto/ccp: not in enabled drivers build config 00:07:12.646 crypto/cnxk: not in enabled drivers build config 00:07:12.646 crypto/dpaa_sec: not in enabled drivers build config 00:07:12.646 crypto/dpaa2_sec: not in enabled drivers build config 00:07:12.646 crypto/ipsec_mb: not in enabled drivers build config 00:07:12.646 crypto/mlx5: not in enabled drivers build config 00:07:12.646 crypto/mvsam: not in enabled drivers build config 00:07:12.646 crypto/nitrox: not in enabled drivers build config 00:07:12.646 crypto/null: not in enabled drivers build config 00:07:12.646 crypto/octeontx: not in enabled drivers build config 00:07:12.646 crypto/openssl: not in enabled drivers build config 00:07:12.646 crypto/scheduler: not in enabled drivers build config 00:07:12.646 crypto/uadk: not in enabled drivers build config 00:07:12.646 crypto/virtio: not in enabled drivers build config 00:07:12.646 compress/isal: not in enabled drivers build config 00:07:12.646 compress/mlx5: not in enabled drivers build config 00:07:12.646 compress/nitrox: not in enabled drivers build config 00:07:12.646 compress/octeontx: not in enabled drivers build config 00:07:12.646 compress/zlib: not in enabled drivers build config 00:07:12.646 regex/*: missing internal dependency, "regexdev" 00:07:12.646 ml/*: missing internal dependency, "mldev" 00:07:12.646 vdpa/ifc: not in enabled drivers build config 00:07:12.646 vdpa/mlx5: not in enabled drivers build config 00:07:12.646 vdpa/nfp: not in enabled drivers build config 00:07:12.646 vdpa/sfc: not in enabled drivers build config 00:07:12.646 event/*: missing internal dependency, "eventdev" 00:07:12.646 baseband/*: missing internal dependency, "bbdev" 00:07:12.646 gpu/*: missing internal dependency, "gpudev" 00:07:12.646 00:07:12.646 00:07:12.646 Build targets in project: 85 00:07:12.646 00:07:12.646 DPDK 24.03.0 00:07:12.646 00:07:12.646 User defined options 00:07:12.646 buildtype : debug 00:07:12.646 default_library : shared 00:07:12.646 libdir : lib 00:07:12.646 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:12.646 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:12.646 c_link_args : 00:07:12.646 cpu_instruction_set: native 00:07:12.646 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:07:12.646 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:07:12.646 enable_docs : false 00:07:12.646 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:07:12.646 enable_kmods : false 00:07:12.646 max_lcores : 128 00:07:12.646 tests : false 00:07:12.646 
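The "User defined options" summary above is the DPDK meson configuration that SPDK's configure generated for the bundled dpdk/ submodule. Driven by hand, an equivalent setup would look roughly like the sketch below; the option values are copied from the summary, and the only assumption is that meson and ninja are invoked directly rather than through SPDK's build.

  # Sketch: standalone meson/ninja invocation matching the logged DPDK options.
  cd /home/vagrant/spdk_repo/spdk/dpdk
  meson setup build-tmp \
      --buildtype=debug \
      --default-library=shared \
      --libdir=lib \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Ddisable_apps='dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test' \
      -Ddisable_libs='acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table' \
      -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
      -Denable_kmods=false \
      -Dmax_lcores=128 \
      -Dtests=false
  ninja -C build-tmp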
00:07:12.646 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:12.646 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:07:12.646 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:12.646 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:12.646 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:12.646 [4/268] Linking static target lib/librte_kvargs.a 00:07:12.646 [5/268] Linking static target lib/librte_log.a 00:07:12.646 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:12.904 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.904 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:12.904 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:12.904 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:13.162 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:13.162 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:13.162 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:13.162 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:13.162 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:13.162 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:13.162 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:13.162 [18/268] Linking target lib/librte_log.so.24.1 00:07:13.162 [19/268] Linking static target lib/librte_telemetry.a 00:07:13.162 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:13.420 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:13.678 [22/268] Linking target lib/librte_kvargs.so.24.1 00:07:13.937 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:13.937 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:13.937 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:13.937 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:13.937 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:13.937 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:13.937 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:14.194 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:14.194 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:14.194 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:14.194 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:14.194 [34/268] Linking target lib/librte_telemetry.so.24.1 00:07:14.451 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:14.451 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:14.709 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:14.710 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 
00:07:14.710 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:14.968 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:14.968 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:14.968 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:14.968 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:14.968 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:14.968 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:14.968 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:15.226 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:15.485 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:15.485 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:15.743 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:15.743 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:15.743 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:15.743 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:16.002 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:16.002 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:16.002 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:16.002 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:16.260 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:16.260 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:16.517 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:16.517 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:16.517 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:16.517 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:16.517 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:16.775 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:17.034 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:17.034 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:17.034 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:17.292 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:17.292 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:17.550 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:17.550 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:17.550 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:17.550 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:17.550 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:17.550 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:17.808 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:17.808 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:17.808 
[79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:18.066 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:18.067 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:18.067 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:18.325 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:18.325 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:18.325 [85/268] Linking static target lib/librte_ring.a 00:07:18.325 [86/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:18.325 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:18.325 [88/268] Linking static target lib/librte_rcu.a 00:07:18.325 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:18.325 [90/268] Linking static target lib/librte_eal.a 00:07:18.892 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:18.892 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:18.892 [93/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:18.892 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:18.892 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:18.892 [96/268] Linking static target lib/librte_mempool.a 00:07:18.892 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:19.150 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:19.150 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:19.150 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:19.409 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:19.409 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:19.409 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:19.667 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:19.667 [105/268] Linking static target lib/librte_mbuf.a 00:07:19.667 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:19.667 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:19.926 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:19.926 [109/268] Linking static target lib/librte_net.a 00:07:19.926 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:20.185 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:20.185 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:20.185 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:20.185 [114/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:20.185 [115/268] Linking static target lib/librte_meter.a 00:07:20.443 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:20.443 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:20.702 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:20.702 [119/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:20.961 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:20.961 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:21.219 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:21.219 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:21.477 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:21.477 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:21.477 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:21.477 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:21.477 [128/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:21.477 [129/268] Linking static target lib/librte_pci.a 00:07:21.736 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:21.736 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:21.736 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:21.736 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:21.736 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:21.737 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:21.995 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:21.995 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:21.995 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:21.995 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:21.995 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:21.995 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:21.995 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:21.995 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:21.995 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:21.995 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:22.254 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:22.254 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:22.254 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:22.254 [149/268] Linking static target lib/librte_cmdline.a 00:07:22.254 [150/268] Linking static target lib/librte_ethdev.a 00:07:22.512 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:22.512 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:22.771 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:22.771 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:22.771 [155/268] Linking static target lib/librte_timer.a 00:07:22.771 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:23.030 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:23.030 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:23.030 [159/268] Linking static target lib/librte_hash.a 00:07:23.289 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:23.289 
[161/268] Linking static target lib/librte_compressdev.a 00:07:23.289 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:23.289 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:23.547 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:23.547 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:23.547 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:23.547 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:23.806 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:23.806 [169/268] Linking static target lib/librte_dmadev.a 00:07:24.064 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:24.064 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:24.064 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:24.064 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:24.321 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:24.321 [175/268] Linking static target lib/librte_cryptodev.a 00:07:24.321 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:24.321 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:24.321 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:24.888 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:24.888 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:24.888 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:24.888 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:24.888 [183/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:24.888 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:24.888 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:24.888 [186/268] Linking static target lib/librte_power.a 00:07:25.148 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:25.148 [188/268] Linking static target lib/librte_reorder.a 00:07:25.407 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:25.666 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:25.666 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:25.666 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:25.666 [193/268] Linking static target lib/librte_security.a 00:07:25.666 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:25.923 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:26.182 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:26.440 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:26.440 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:26.440 [199/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:26.697 [200/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:26.697 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:26.697 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:26.954 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:26.954 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:26.954 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:27.211 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:27.212 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:27.468 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:27.468 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:27.468 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:27.468 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:27.468 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:27.726 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:27.726 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:27.726 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:27.726 [216/268] Linking static target drivers/librte_bus_vdev.a 00:07:27.726 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:27.726 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:27.726 [219/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:27.726 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:27.726 [221/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:27.726 [222/268] Linking static target drivers/librte_bus_pci.a 00:07:27.983 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:27.984 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:27.984 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:27.984 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:27.984 [227/268] Linking static target drivers/librte_mempool_ring.a 00:07:28.241 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:28.808 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:29.066 [230/268] Linking static target lib/librte_vhost.a 00:07:29.633 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:29.633 [232/268] Linking target lib/librte_eal.so.24.1 00:07:29.891 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:29.891 [234/268] Linking target lib/librte_ring.so.24.1 00:07:29.891 [235/268] Linking target lib/librte_timer.so.24.1 00:07:29.891 [236/268] Linking target lib/librte_pci.so.24.1 00:07:29.891 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:29.891 [238/268] Linking target lib/librte_meter.so.24.1 00:07:29.891 [239/268] Linking target lib/librte_dmadev.so.24.1 
00:07:29.891 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:29.891 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:29.891 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:30.228 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:30.228 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:30.228 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:30.228 [246/268] Linking target lib/librte_rcu.so.24.1 00:07:30.228 [247/268] Linking target lib/librte_mempool.so.24.1 00:07:30.228 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:30.228 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:30.228 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:30.228 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:30.228 [252/268] Linking target lib/librte_mbuf.so.24.1 00:07:30.486 [253/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:30.486 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:30.486 [255/268] Linking target lib/librte_net.so.24.1 00:07:30.486 [256/268] Linking target lib/librte_compressdev.so.24.1 00:07:30.486 [257/268] Linking target lib/librte_reorder.so.24.1 00:07:30.486 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:07:30.745 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:30.745 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:30.745 [261/268] Linking target lib/librte_cmdline.so.24.1 00:07:30.745 [262/268] Linking target lib/librte_security.so.24.1 00:07:30.745 [263/268] Linking target lib/librte_hash.so.24.1 00:07:30.745 [264/268] Linking target lib/librte_ethdev.so.24.1 00:07:30.745 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:30.745 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:31.003 [267/268] Linking target lib/librte_power.so.24.1 00:07:31.003 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:31.003 INFO: autodetecting backend as ninja 00:07:31.003 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:07:57.536 CC lib/ut_mock/mock.o 00:07:57.536 CC lib/log/log.o 00:07:57.536 CC lib/log/log_deprecated.o 00:07:57.536 CC lib/log/log_flags.o 00:07:57.536 CC lib/ut/ut.o 00:07:57.536 LIB libspdk_ut.a 00:07:57.536 LIB libspdk_ut_mock.a 00:07:57.536 LIB libspdk_log.a 00:07:57.536 SO libspdk_ut.so.2.0 00:07:57.536 SO libspdk_ut_mock.so.6.0 00:07:57.536 SO libspdk_log.so.7.0 00:07:57.536 SYMLINK libspdk_ut.so 00:07:57.536 SYMLINK libspdk_ut_mock.so 00:07:57.536 SYMLINK libspdk_log.so 00:07:57.536 CC lib/dma/dma.o 00:07:57.536 CC lib/ioat/ioat.o 00:07:57.536 CC lib/util/base64.o 00:07:57.536 CC lib/util/bit_array.o 00:07:57.536 CC lib/util/cpuset.o 00:07:57.536 CC lib/util/crc16.o 00:07:57.536 CC lib/util/crc32.o 00:07:57.536 CC lib/util/crc32c.o 00:07:57.536 CXX lib/trace_parser/trace.o 00:07:57.536 CC lib/vfio_user/host/vfio_user_pci.o 00:07:57.536 CC lib/util/crc32_ieee.o 00:07:57.536 CC lib/util/crc64.o 00:07:57.536 CC lib/util/dif.o 
00:07:57.536 CC lib/util/fd.o 00:07:57.536 LIB libspdk_dma.a 00:07:57.536 CC lib/util/fd_group.o 00:07:57.536 CC lib/util/file.o 00:07:57.536 SO libspdk_dma.so.5.0 00:07:57.536 LIB libspdk_ioat.a 00:07:57.536 SO libspdk_ioat.so.7.0 00:07:57.536 SYMLINK libspdk_dma.so 00:07:57.536 CC lib/vfio_user/host/vfio_user.o 00:07:57.536 CC lib/util/hexlify.o 00:07:57.536 CC lib/util/iov.o 00:07:57.536 SYMLINK libspdk_ioat.so 00:07:57.536 CC lib/util/math.o 00:07:57.536 CC lib/util/net.o 00:07:57.536 CC lib/util/pipe.o 00:07:57.536 CC lib/util/strerror_tls.o 00:07:57.536 CC lib/util/string.o 00:07:57.536 CC lib/util/uuid.o 00:07:57.536 CC lib/util/xor.o 00:07:57.536 LIB libspdk_vfio_user.a 00:07:57.536 CC lib/util/zipf.o 00:07:57.536 CC lib/util/md5.o 00:07:57.536 SO libspdk_vfio_user.so.5.0 00:07:57.536 SYMLINK libspdk_vfio_user.so 00:07:57.536 LIB libspdk_util.a 00:07:57.536 SO libspdk_util.so.10.0 00:07:57.536 LIB libspdk_trace_parser.a 00:07:57.536 SO libspdk_trace_parser.so.6.0 00:07:57.536 SYMLINK libspdk_util.so 00:07:57.536 SYMLINK libspdk_trace_parser.so 00:07:57.536 CC lib/conf/conf.o 00:07:57.536 CC lib/json/json_parse.o 00:07:57.536 CC lib/env_dpdk/env.o 00:07:57.536 CC lib/json/json_util.o 00:07:57.536 CC lib/env_dpdk/pci.o 00:07:57.536 CC lib/env_dpdk/memory.o 00:07:57.536 CC lib/vmd/vmd.o 00:07:57.536 CC lib/idxd/idxd.o 00:07:57.536 CC lib/rdma_provider/common.o 00:07:57.536 CC lib/rdma_utils/rdma_utils.o 00:07:57.536 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:57.536 LIB libspdk_conf.a 00:07:57.536 CC lib/json/json_write.o 00:07:57.536 CC lib/vmd/led.o 00:07:57.536 SO libspdk_conf.so.6.0 00:07:57.536 LIB libspdk_rdma_utils.a 00:07:57.536 SYMLINK libspdk_conf.so 00:07:57.536 CC lib/env_dpdk/init.o 00:07:57.536 SO libspdk_rdma_utils.so.1.0 00:07:57.536 SYMLINK libspdk_rdma_utils.so 00:07:57.536 CC lib/env_dpdk/threads.o 00:07:57.536 CC lib/env_dpdk/pci_ioat.o 00:07:57.536 CC lib/env_dpdk/pci_virtio.o 00:07:57.536 LIB libspdk_rdma_provider.a 00:07:57.795 SO libspdk_rdma_provider.so.6.0 00:07:57.795 SYMLINK libspdk_rdma_provider.so 00:07:57.795 CC lib/env_dpdk/pci_vmd.o 00:07:57.795 CC lib/env_dpdk/pci_idxd.o 00:07:57.795 CC lib/env_dpdk/pci_event.o 00:07:57.795 LIB libspdk_json.a 00:07:57.795 CC lib/env_dpdk/sigbus_handler.o 00:07:57.795 SO libspdk_json.so.6.0 00:07:57.795 CC lib/idxd/idxd_user.o 00:07:57.795 LIB libspdk_vmd.a 00:07:57.795 CC lib/idxd/idxd_kernel.o 00:07:57.795 SYMLINK libspdk_json.so 00:07:58.054 CC lib/env_dpdk/pci_dpdk.o 00:07:58.054 SO libspdk_vmd.so.6.0 00:07:58.054 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:58.054 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:58.054 SYMLINK libspdk_vmd.so 00:07:58.054 LIB libspdk_idxd.a 00:07:58.054 CC lib/jsonrpc/jsonrpc_server.o 00:07:58.054 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:58.054 CC lib/jsonrpc/jsonrpc_client.o 00:07:58.054 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:58.054 SO libspdk_idxd.so.12.1 00:07:58.311 SYMLINK libspdk_idxd.so 00:07:58.311 LIB libspdk_jsonrpc.a 00:07:58.311 SO libspdk_jsonrpc.so.6.0 00:07:58.569 SYMLINK libspdk_jsonrpc.so 00:07:58.827 LIB libspdk_env_dpdk.a 00:07:58.827 CC lib/rpc/rpc.o 00:07:58.827 SO libspdk_env_dpdk.so.15.0 00:07:59.086 SYMLINK libspdk_env_dpdk.so 00:07:59.086 LIB libspdk_rpc.a 00:07:59.086 SO libspdk_rpc.so.6.0 00:07:59.086 SYMLINK libspdk_rpc.so 00:07:59.344 CC lib/keyring/keyring.o 00:07:59.344 CC lib/keyring/keyring_rpc.o 00:07:59.344 CC lib/notify/notify_rpc.o 00:07:59.344 CC lib/notify/notify.o 00:07:59.344 CC lib/trace/trace.o 00:07:59.344 CC lib/trace/trace_flags.o 00:07:59.344 
CC lib/trace/trace_rpc.o 00:07:59.603 LIB libspdk_notify.a 00:07:59.603 SO libspdk_notify.so.6.0 00:07:59.603 LIB libspdk_keyring.a 00:07:59.603 SYMLINK libspdk_notify.so 00:07:59.603 LIB libspdk_trace.a 00:07:59.603 SO libspdk_keyring.so.2.0 00:07:59.603 SO libspdk_trace.so.11.0 00:07:59.603 SYMLINK libspdk_keyring.so 00:07:59.603 SYMLINK libspdk_trace.so 00:07:59.862 CC lib/thread/thread.o 00:07:59.862 CC lib/thread/iobuf.o 00:07:59.862 CC lib/sock/sock.o 00:07:59.862 CC lib/sock/sock_rpc.o 00:08:00.430 LIB libspdk_sock.a 00:08:00.430 SO libspdk_sock.so.10.0 00:08:00.689 SYMLINK libspdk_sock.so 00:08:00.947 CC lib/nvme/nvme_ctrlr.o 00:08:00.947 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:00.947 CC lib/nvme/nvme_fabric.o 00:08:00.947 CC lib/nvme/nvme_ns.o 00:08:00.947 CC lib/nvme/nvme_ns_cmd.o 00:08:00.947 CC lib/nvme/nvme_pcie.o 00:08:00.947 CC lib/nvme/nvme_pcie_common.o 00:08:00.947 CC lib/nvme/nvme_qpair.o 00:08:00.947 CC lib/nvme/nvme.o 00:08:01.513 LIB libspdk_thread.a 00:08:01.513 SO libspdk_thread.so.10.1 00:08:01.513 SYMLINK libspdk_thread.so 00:08:01.513 CC lib/nvme/nvme_quirks.o 00:08:01.772 CC lib/nvme/nvme_transport.o 00:08:01.772 CC lib/nvme/nvme_discovery.o 00:08:01.772 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:01.772 CC lib/accel/accel.o 00:08:01.772 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:01.772 CC lib/nvme/nvme_tcp.o 00:08:01.772 CC lib/nvme/nvme_opal.o 00:08:02.031 CC lib/nvme/nvme_io_msg.o 00:08:02.290 CC lib/nvme/nvme_poll_group.o 00:08:02.290 CC lib/nvme/nvme_zns.o 00:08:02.290 CC lib/nvme/nvme_stubs.o 00:08:02.290 CC lib/nvme/nvme_auth.o 00:08:02.548 CC lib/accel/accel_rpc.o 00:08:02.548 CC lib/accel/accel_sw.o 00:08:02.807 CC lib/nvme/nvme_cuse.o 00:08:02.807 CC lib/blob/blobstore.o 00:08:02.807 CC lib/init/json_config.o 00:08:02.807 CC lib/init/subsystem.o 00:08:03.066 LIB libspdk_accel.a 00:08:03.066 CC lib/init/subsystem_rpc.o 00:08:03.066 CC lib/init/rpc.o 00:08:03.066 SO libspdk_accel.so.16.0 00:08:03.066 SYMLINK libspdk_accel.so 00:08:03.066 CC lib/blob/request.o 00:08:03.066 LIB libspdk_init.a 00:08:03.066 CC lib/nvme/nvme_rdma.o 00:08:03.325 SO libspdk_init.so.6.0 00:08:03.325 CC lib/virtio/virtio.o 00:08:03.325 CC lib/fsdev/fsdev.o 00:08:03.325 SYMLINK libspdk_init.so 00:08:03.325 CC lib/bdev/bdev.o 00:08:03.325 CC lib/bdev/bdev_rpc.o 00:08:03.325 CC lib/bdev/bdev_zone.o 00:08:03.583 CC lib/bdev/part.o 00:08:03.583 CC lib/bdev/scsi_nvme.o 00:08:03.583 CC lib/virtio/virtio_vhost_user.o 00:08:03.583 CC lib/virtio/virtio_vfio_user.o 00:08:03.583 CC lib/virtio/virtio_pci.o 00:08:03.583 CC lib/blob/zeroes.o 00:08:03.843 CC lib/fsdev/fsdev_io.o 00:08:03.843 CC lib/fsdev/fsdev_rpc.o 00:08:03.843 CC lib/blob/blob_bs_dev.o 00:08:03.843 CC lib/event/app.o 00:08:03.843 CC lib/event/reactor.o 00:08:03.843 LIB libspdk_virtio.a 00:08:03.843 CC lib/event/log_rpc.o 00:08:04.101 CC lib/event/app_rpc.o 00:08:04.101 SO libspdk_virtio.so.7.0 00:08:04.101 SYMLINK libspdk_virtio.so 00:08:04.101 CC lib/event/scheduler_static.o 00:08:04.101 LIB libspdk_fsdev.a 00:08:04.101 SO libspdk_fsdev.so.1.0 00:08:04.360 SYMLINK libspdk_fsdev.so 00:08:04.360 LIB libspdk_event.a 00:08:04.360 SO libspdk_event.so.14.0 00:08:04.360 SYMLINK libspdk_event.so 00:08:04.360 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:08:04.618 LIB libspdk_nvme.a 00:08:04.877 SO libspdk_nvme.so.14.0 00:08:05.136 SYMLINK libspdk_nvme.so 00:08:05.136 LIB libspdk_fuse_dispatcher.a 00:08:05.136 SO libspdk_fuse_dispatcher.so.1.0 00:08:05.136 SYMLINK libspdk_fuse_dispatcher.so 00:08:06.106 LIB libspdk_bdev.a 00:08:06.106 LIB 
libspdk_blob.a 00:08:06.106 SO libspdk_bdev.so.16.0 00:08:06.106 SO libspdk_blob.so.11.0 00:08:06.106 SYMLINK libspdk_bdev.so 00:08:06.370 SYMLINK libspdk_blob.so 00:08:06.370 CC lib/ublk/ublk.o 00:08:06.370 CC lib/ublk/ublk_rpc.o 00:08:06.370 CC lib/scsi/dev.o 00:08:06.370 CC lib/scsi/lun.o 00:08:06.370 CC lib/scsi/port.o 00:08:06.370 CC lib/ftl/ftl_core.o 00:08:06.370 CC lib/nvmf/ctrlr.o 00:08:06.370 CC lib/nbd/nbd.o 00:08:06.370 CC lib/lvol/lvol.o 00:08:06.370 CC lib/blobfs/blobfs.o 00:08:06.629 CC lib/blobfs/tree.o 00:08:06.629 CC lib/nbd/nbd_rpc.o 00:08:06.629 CC lib/scsi/scsi.o 00:08:06.629 CC lib/scsi/scsi_bdev.o 00:08:06.887 CC lib/scsi/scsi_pr.o 00:08:06.887 CC lib/ftl/ftl_init.o 00:08:06.887 CC lib/scsi/scsi_rpc.o 00:08:06.887 CC lib/scsi/task.o 00:08:06.887 LIB libspdk_nbd.a 00:08:06.887 SO libspdk_nbd.so.7.0 00:08:06.887 CC lib/ftl/ftl_layout.o 00:08:06.887 SYMLINK libspdk_nbd.so 00:08:06.887 CC lib/nvmf/ctrlr_discovery.o 00:08:07.146 CC lib/nvmf/ctrlr_bdev.o 00:08:07.146 CC lib/ftl/ftl_debug.o 00:08:07.146 LIB libspdk_ublk.a 00:08:07.146 SO libspdk_ublk.so.3.0 00:08:07.146 CC lib/nvmf/subsystem.o 00:08:07.146 SYMLINK libspdk_ublk.so 00:08:07.146 CC lib/ftl/ftl_io.o 00:08:07.146 LIB libspdk_scsi.a 00:08:07.405 CC lib/ftl/ftl_sb.o 00:08:07.405 CC lib/ftl/ftl_l2p.o 00:08:07.405 LIB libspdk_blobfs.a 00:08:07.405 SO libspdk_scsi.so.9.0 00:08:07.405 SO libspdk_blobfs.so.10.0 00:08:07.405 SYMLINK libspdk_blobfs.so 00:08:07.405 SYMLINK libspdk_scsi.so 00:08:07.405 LIB libspdk_lvol.a 00:08:07.405 CC lib/nvmf/nvmf.o 00:08:07.405 CC lib/ftl/ftl_l2p_flat.o 00:08:07.405 SO libspdk_lvol.so.10.0 00:08:07.405 CC lib/nvmf/nvmf_rpc.o 00:08:07.405 CC lib/nvmf/transport.o 00:08:07.663 CC lib/ftl/ftl_nv_cache.o 00:08:07.663 SYMLINK libspdk_lvol.so 00:08:07.663 CC lib/iscsi/conn.o 00:08:07.663 CC lib/iscsi/init_grp.o 00:08:07.663 CC lib/vhost/vhost.o 00:08:07.921 CC lib/nvmf/tcp.o 00:08:07.921 CC lib/iscsi/iscsi.o 00:08:08.180 CC lib/iscsi/param.o 00:08:08.180 CC lib/iscsi/portal_grp.o 00:08:08.180 CC lib/vhost/vhost_rpc.o 00:08:08.438 CC lib/vhost/vhost_scsi.o 00:08:08.438 CC lib/vhost/vhost_blk.o 00:08:08.438 CC lib/vhost/rte_vhost_user.o 00:08:08.438 CC lib/ftl/ftl_band.o 00:08:08.438 CC lib/nvmf/stubs.o 00:08:08.438 CC lib/iscsi/tgt_node.o 00:08:08.438 CC lib/nvmf/mdns_server.o 00:08:09.005 CC lib/ftl/ftl_band_ops.o 00:08:09.005 CC lib/ftl/ftl_writer.o 00:08:09.005 CC lib/nvmf/rdma.o 00:08:09.005 CC lib/iscsi/iscsi_subsystem.o 00:08:09.005 CC lib/iscsi/iscsi_rpc.o 00:08:09.263 CC lib/iscsi/task.o 00:08:09.263 CC lib/ftl/ftl_rq.o 00:08:09.263 CC lib/nvmf/auth.o 00:08:09.263 CC lib/ftl/ftl_reloc.o 00:08:09.521 CC lib/ftl/ftl_l2p_cache.o 00:08:09.521 CC lib/ftl/ftl_p2l.o 00:08:09.521 CC lib/ftl/ftl_p2l_log.o 00:08:09.521 LIB libspdk_iscsi.a 00:08:09.521 CC lib/ftl/mngt/ftl_mngt.o 00:08:09.521 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:08:09.521 SO libspdk_iscsi.so.8.0 00:08:09.521 LIB libspdk_vhost.a 00:08:09.780 SO libspdk_vhost.so.8.0 00:08:09.780 SYMLINK libspdk_iscsi.so 00:08:09.780 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:09.780 CC lib/ftl/mngt/ftl_mngt_startup.o 00:08:09.780 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:09.780 SYMLINK libspdk_vhost.so 00:08:09.780 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:09.780 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:09.780 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:09.780 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:10.039 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:08:10.039 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:08:10.039 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:10.039 CC 
lib/ftl/mngt/ftl_mngt_upgrade.o 00:08:10.039 CC lib/ftl/utils/ftl_conf.o 00:08:10.039 CC lib/ftl/utils/ftl_md.o 00:08:10.039 CC lib/ftl/utils/ftl_mempool.o 00:08:10.039 CC lib/ftl/utils/ftl_bitmap.o 00:08:10.297 CC lib/ftl/utils/ftl_property.o 00:08:10.297 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:10.297 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:10.297 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:10.297 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:10.297 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:10.297 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:10.297 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:08:10.297 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:10.556 CC lib/ftl/upgrade/ftl_sb_v5.o 00:08:10.556 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:10.556 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:10.556 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:08:10.556 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:08:10.556 CC lib/ftl/base/ftl_base_dev.o 00:08:10.556 CC lib/ftl/base/ftl_base_bdev.o 00:08:10.556 CC lib/ftl/ftl_trace.o 00:08:10.814 LIB libspdk_ftl.a 00:08:11.073 LIB libspdk_nvmf.a 00:08:11.073 SO libspdk_nvmf.so.19.0 00:08:11.073 SO libspdk_ftl.so.9.0 00:08:11.332 SYMLINK libspdk_nvmf.so 00:08:11.332 SYMLINK libspdk_ftl.so 00:08:11.898 CC module/env_dpdk/env_dpdk_rpc.o 00:08:11.898 CC module/accel/iaa/accel_iaa.o 00:08:11.898 CC module/accel/error/accel_error.o 00:08:11.898 CC module/accel/ioat/accel_ioat.o 00:08:11.898 CC module/blob/bdev/blob_bdev.o 00:08:11.898 CC module/keyring/file/keyring.o 00:08:11.898 CC module/scheduler/dynamic/scheduler_dynamic.o 00:08:11.898 CC module/accel/dsa/accel_dsa.o 00:08:11.898 CC module/fsdev/aio/fsdev_aio.o 00:08:11.898 CC module/sock/posix/posix.o 00:08:11.898 LIB libspdk_env_dpdk_rpc.a 00:08:11.898 SO libspdk_env_dpdk_rpc.so.6.0 00:08:11.898 SYMLINK libspdk_env_dpdk_rpc.so 00:08:11.898 CC module/fsdev/aio/fsdev_aio_rpc.o 00:08:11.898 CC module/keyring/file/keyring_rpc.o 00:08:11.898 CC module/accel/error/accel_error_rpc.o 00:08:11.898 CC module/accel/ioat/accel_ioat_rpc.o 00:08:12.157 LIB libspdk_scheduler_dynamic.a 00:08:12.157 CC module/accel/iaa/accel_iaa_rpc.o 00:08:12.157 SO libspdk_scheduler_dynamic.so.4.0 00:08:12.157 LIB libspdk_keyring_file.a 00:08:12.157 SYMLINK libspdk_scheduler_dynamic.so 00:08:12.157 LIB libspdk_blob_bdev.a 00:08:12.157 CC module/accel/dsa/accel_dsa_rpc.o 00:08:12.157 SO libspdk_keyring_file.so.2.0 00:08:12.157 SO libspdk_blob_bdev.so.11.0 00:08:12.157 CC module/fsdev/aio/linux_aio_mgr.o 00:08:12.157 LIB libspdk_accel_error.a 00:08:12.157 LIB libspdk_accel_ioat.a 00:08:12.157 SO libspdk_accel_error.so.2.0 00:08:12.157 LIB libspdk_accel_iaa.a 00:08:12.157 SO libspdk_accel_ioat.so.6.0 00:08:12.157 SYMLINK libspdk_blob_bdev.so 00:08:12.157 SYMLINK libspdk_keyring_file.so 00:08:12.157 SO libspdk_accel_iaa.so.3.0 00:08:12.157 SYMLINK libspdk_accel_ioat.so 00:08:12.157 SYMLINK libspdk_accel_error.so 00:08:12.416 LIB libspdk_accel_dsa.a 00:08:12.416 SYMLINK libspdk_accel_iaa.so 00:08:12.416 SO libspdk_accel_dsa.so.5.0 00:08:12.416 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:08:12.416 SYMLINK libspdk_accel_dsa.so 00:08:12.416 CC module/keyring/linux/keyring.o 00:08:12.416 CC module/keyring/linux/keyring_rpc.o 00:08:12.416 CC module/scheduler/gscheduler/gscheduler.o 00:08:12.416 CC module/sock/uring/uring.o 00:08:12.416 LIB libspdk_fsdev_aio.a 00:08:12.416 LIB libspdk_scheduler_dpdk_governor.a 00:08:12.416 SO libspdk_fsdev_aio.so.1.0 00:08:12.674 SO libspdk_scheduler_dpdk_governor.so.4.0 00:08:12.674 CC module/bdev/error/vbdev_error.o 
00:08:12.674 CC module/bdev/delay/vbdev_delay.o 00:08:12.674 LIB libspdk_keyring_linux.a 00:08:12.674 CC module/blobfs/bdev/blobfs_bdev.o 00:08:12.674 LIB libspdk_scheduler_gscheduler.a 00:08:12.674 SO libspdk_keyring_linux.so.1.0 00:08:12.674 SYMLINK libspdk_fsdev_aio.so 00:08:12.674 SYMLINK libspdk_scheduler_dpdk_governor.so 00:08:12.674 CC module/bdev/error/vbdev_error_rpc.o 00:08:12.674 LIB libspdk_sock_posix.a 00:08:12.674 CC module/bdev/delay/vbdev_delay_rpc.o 00:08:12.674 SO libspdk_scheduler_gscheduler.so.4.0 00:08:12.674 SO libspdk_sock_posix.so.6.0 00:08:12.674 SYMLINK libspdk_keyring_linux.so 00:08:12.674 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:08:12.674 SYMLINK libspdk_scheduler_gscheduler.so 00:08:12.674 SYMLINK libspdk_sock_posix.so 00:08:12.674 CC module/bdev/gpt/gpt.o 00:08:12.933 LIB libspdk_bdev_error.a 00:08:12.933 CC module/bdev/lvol/vbdev_lvol.o 00:08:12.933 LIB libspdk_blobfs_bdev.a 00:08:12.933 SO libspdk_bdev_error.so.6.0 00:08:12.933 CC module/bdev/malloc/bdev_malloc.o 00:08:12.933 SO libspdk_blobfs_bdev.so.6.0 00:08:12.933 CC module/bdev/null/bdev_null.o 00:08:12.933 LIB libspdk_bdev_delay.a 00:08:12.933 SYMLINK libspdk_bdev_error.so 00:08:12.933 CC module/bdev/nvme/bdev_nvme.o 00:08:12.933 CC module/bdev/gpt/vbdev_gpt.o 00:08:12.933 SO libspdk_bdev_delay.so.6.0 00:08:12.933 CC module/bdev/passthru/vbdev_passthru.o 00:08:12.933 SYMLINK libspdk_blobfs_bdev.so 00:08:12.933 SYMLINK libspdk_bdev_delay.so 00:08:12.933 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:08:13.191 CC module/bdev/raid/bdev_raid.o 00:08:13.191 CC module/bdev/split/vbdev_split.o 00:08:13.191 LIB libspdk_sock_uring.a 00:08:13.191 SO libspdk_sock_uring.so.5.0 00:08:13.191 CC module/bdev/null/bdev_null_rpc.o 00:08:13.191 SYMLINK libspdk_sock_uring.so 00:08:13.191 CC module/bdev/malloc/bdev_malloc_rpc.o 00:08:13.191 CC module/bdev/nvme/bdev_nvme_rpc.o 00:08:13.191 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:08:13.450 LIB libspdk_bdev_gpt.a 00:08:13.450 SO libspdk_bdev_gpt.so.6.0 00:08:13.450 LIB libspdk_bdev_null.a 00:08:13.450 CC module/bdev/raid/bdev_raid_rpc.o 00:08:13.450 CC module/bdev/split/vbdev_split_rpc.o 00:08:13.450 SO libspdk_bdev_null.so.6.0 00:08:13.450 LIB libspdk_bdev_lvol.a 00:08:13.450 SYMLINK libspdk_bdev_gpt.so 00:08:13.450 CC module/bdev/nvme/nvme_rpc.o 00:08:13.450 LIB libspdk_bdev_malloc.a 00:08:13.450 SO libspdk_bdev_lvol.so.6.0 00:08:13.450 LIB libspdk_bdev_passthru.a 00:08:13.450 SYMLINK libspdk_bdev_null.so 00:08:13.450 SO libspdk_bdev_malloc.so.6.0 00:08:13.450 SO libspdk_bdev_passthru.so.6.0 00:08:13.450 SYMLINK libspdk_bdev_lvol.so 00:08:13.450 SYMLINK libspdk_bdev_malloc.so 00:08:13.709 SYMLINK libspdk_bdev_passthru.so 00:08:13.709 LIB libspdk_bdev_split.a 00:08:13.709 SO libspdk_bdev_split.so.6.0 00:08:13.709 CC module/bdev/raid/bdev_raid_sb.o 00:08:13.709 CC module/bdev/zone_block/vbdev_zone_block.o 00:08:13.709 CC module/bdev/uring/bdev_uring.o 00:08:13.709 SYMLINK libspdk_bdev_split.so 00:08:13.709 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:08:13.709 CC module/bdev/aio/bdev_aio.o 00:08:13.709 CC module/bdev/nvme/bdev_mdns_client.o 00:08:13.709 CC module/bdev/ftl/bdev_ftl.o 00:08:13.968 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:13.968 CC module/bdev/uring/bdev_uring_rpc.o 00:08:13.968 CC module/bdev/nvme/vbdev_opal.o 00:08:13.968 LIB libspdk_bdev_zone_block.a 00:08:13.968 SO libspdk_bdev_zone_block.so.6.0 00:08:13.968 CC module/bdev/iscsi/bdev_iscsi.o 00:08:13.968 CC module/bdev/aio/bdev_aio_rpc.o 00:08:13.968 CC module/bdev/iscsi/bdev_iscsi_rpc.o 
00:08:13.968 SYMLINK libspdk_bdev_zone_block.so 00:08:14.234 CC module/bdev/raid/raid0.o 00:08:14.234 LIB libspdk_bdev_ftl.a 00:08:14.234 LIB libspdk_bdev_uring.a 00:08:14.234 SO libspdk_bdev_uring.so.6.0 00:08:14.234 SO libspdk_bdev_ftl.so.6.0 00:08:14.234 CC module/bdev/raid/raid1.o 00:08:14.234 SYMLINK libspdk_bdev_ftl.so 00:08:14.234 SYMLINK libspdk_bdev_uring.so 00:08:14.234 CC module/bdev/raid/concat.o 00:08:14.234 CC module/bdev/nvme/vbdev_opal_rpc.o 00:08:14.234 LIB libspdk_bdev_aio.a 00:08:14.234 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:08:14.234 SO libspdk_bdev_aio.so.6.0 00:08:14.234 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:14.234 CC module/bdev/virtio/bdev_virtio_blk.o 00:08:14.234 SYMLINK libspdk_bdev_aio.so 00:08:14.492 CC module/bdev/virtio/bdev_virtio_rpc.o 00:08:14.492 LIB libspdk_bdev_iscsi.a 00:08:14.492 SO libspdk_bdev_iscsi.so.6.0 00:08:14.492 LIB libspdk_bdev_raid.a 00:08:14.492 SYMLINK libspdk_bdev_iscsi.so 00:08:14.492 SO libspdk_bdev_raid.so.6.0 00:08:14.751 SYMLINK libspdk_bdev_raid.so 00:08:14.751 LIB libspdk_bdev_virtio.a 00:08:15.010 SO libspdk_bdev_virtio.so.6.0 00:08:15.010 SYMLINK libspdk_bdev_virtio.so 00:08:15.269 LIB libspdk_bdev_nvme.a 00:08:15.269 SO libspdk_bdev_nvme.so.7.0 00:08:15.527 SYMLINK libspdk_bdev_nvme.so 00:08:16.093 CC module/event/subsystems/fsdev/fsdev.o 00:08:16.093 CC module/event/subsystems/iobuf/iobuf.o 00:08:16.093 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:08:16.093 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:16.093 CC module/event/subsystems/keyring/keyring.o 00:08:16.093 CC module/event/subsystems/vmd/vmd.o 00:08:16.093 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:16.094 CC module/event/subsystems/scheduler/scheduler.o 00:08:16.094 CC module/event/subsystems/sock/sock.o 00:08:16.094 LIB libspdk_event_keyring.a 00:08:16.094 LIB libspdk_event_iobuf.a 00:08:16.094 LIB libspdk_event_sock.a 00:08:16.094 LIB libspdk_event_vhost_blk.a 00:08:16.094 LIB libspdk_event_fsdev.a 00:08:16.094 LIB libspdk_event_scheduler.a 00:08:16.094 LIB libspdk_event_vmd.a 00:08:16.094 SO libspdk_event_vhost_blk.so.3.0 00:08:16.094 SO libspdk_event_keyring.so.1.0 00:08:16.094 SO libspdk_event_sock.so.5.0 00:08:16.094 SO libspdk_event_fsdev.so.1.0 00:08:16.094 SO libspdk_event_iobuf.so.3.0 00:08:16.094 SO libspdk_event_scheduler.so.4.0 00:08:16.094 SO libspdk_event_vmd.so.6.0 00:08:16.094 SYMLINK libspdk_event_vhost_blk.so 00:08:16.094 SYMLINK libspdk_event_sock.so 00:08:16.094 SYMLINK libspdk_event_fsdev.so 00:08:16.094 SYMLINK libspdk_event_keyring.so 00:08:16.094 SYMLINK libspdk_event_scheduler.so 00:08:16.094 SYMLINK libspdk_event_iobuf.so 00:08:16.094 SYMLINK libspdk_event_vmd.so 00:08:16.350 CC module/event/subsystems/accel/accel.o 00:08:16.608 LIB libspdk_event_accel.a 00:08:16.608 SO libspdk_event_accel.so.6.0 00:08:16.608 SYMLINK libspdk_event_accel.so 00:08:16.867 CC module/event/subsystems/bdev/bdev.o 00:08:17.126 LIB libspdk_event_bdev.a 00:08:17.126 SO libspdk_event_bdev.so.6.0 00:08:17.385 SYMLINK libspdk_event_bdev.so 00:08:17.385 CC module/event/subsystems/scsi/scsi.o 00:08:17.385 CC module/event/subsystems/ublk/ublk.o 00:08:17.385 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:17.385 CC module/event/subsystems/nbd/nbd.o 00:08:17.386 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:17.644 LIB libspdk_event_ublk.a 00:08:17.644 SO libspdk_event_ublk.so.3.0 00:08:17.644 LIB libspdk_event_nbd.a 00:08:17.644 LIB libspdk_event_scsi.a 00:08:17.644 SO libspdk_event_nbd.so.6.0 00:08:17.644 SO libspdk_event_scsi.so.6.0 
00:08:17.644 SYMLINK libspdk_event_ublk.so 00:08:17.644 LIB libspdk_event_nvmf.a 00:08:17.644 SYMLINK libspdk_event_nbd.so 00:08:17.644 SYMLINK libspdk_event_scsi.so 00:08:17.902 SO libspdk_event_nvmf.so.6.0 00:08:17.902 SYMLINK libspdk_event_nvmf.so 00:08:17.902 CC module/event/subsystems/iscsi/iscsi.o 00:08:17.902 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:18.160 LIB libspdk_event_vhost_scsi.a 00:08:18.160 LIB libspdk_event_iscsi.a 00:08:18.160 SO libspdk_event_vhost_scsi.so.3.0 00:08:18.160 SO libspdk_event_iscsi.so.6.0 00:08:18.160 SYMLINK libspdk_event_vhost_scsi.so 00:08:18.160 SYMLINK libspdk_event_iscsi.so 00:08:18.419 SO libspdk.so.6.0 00:08:18.419 SYMLINK libspdk.so 00:08:18.678 TEST_HEADER include/spdk/accel.h 00:08:18.678 TEST_HEADER include/spdk/accel_module.h 00:08:18.678 TEST_HEADER include/spdk/assert.h 00:08:18.678 TEST_HEADER include/spdk/barrier.h 00:08:18.678 CXX app/trace/trace.o 00:08:18.678 TEST_HEADER include/spdk/base64.h 00:08:18.678 CC app/trace_record/trace_record.o 00:08:18.678 TEST_HEADER include/spdk/bdev.h 00:08:18.678 TEST_HEADER include/spdk/bdev_module.h 00:08:18.678 TEST_HEADER include/spdk/bdev_zone.h 00:08:18.678 TEST_HEADER include/spdk/bit_array.h 00:08:18.678 TEST_HEADER include/spdk/bit_pool.h 00:08:18.678 TEST_HEADER include/spdk/blob_bdev.h 00:08:18.678 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:18.678 TEST_HEADER include/spdk/blobfs.h 00:08:18.678 TEST_HEADER include/spdk/blob.h 00:08:18.678 TEST_HEADER include/spdk/conf.h 00:08:18.678 TEST_HEADER include/spdk/config.h 00:08:18.678 TEST_HEADER include/spdk/cpuset.h 00:08:18.678 TEST_HEADER include/spdk/crc16.h 00:08:18.678 TEST_HEADER include/spdk/crc32.h 00:08:18.678 TEST_HEADER include/spdk/crc64.h 00:08:18.678 TEST_HEADER include/spdk/dif.h 00:08:18.678 CC app/iscsi_tgt/iscsi_tgt.o 00:08:18.678 TEST_HEADER include/spdk/dma.h 00:08:18.678 TEST_HEADER include/spdk/endian.h 00:08:18.678 TEST_HEADER include/spdk/env_dpdk.h 00:08:18.678 TEST_HEADER include/spdk/env.h 00:08:18.678 TEST_HEADER include/spdk/event.h 00:08:18.678 TEST_HEADER include/spdk/fd_group.h 00:08:18.678 CC app/nvmf_tgt/nvmf_main.o 00:08:18.678 TEST_HEADER include/spdk/fd.h 00:08:18.678 TEST_HEADER include/spdk/file.h 00:08:18.678 TEST_HEADER include/spdk/fsdev.h 00:08:18.678 TEST_HEADER include/spdk/fsdev_module.h 00:08:18.678 CC app/spdk_tgt/spdk_tgt.o 00:08:18.678 TEST_HEADER include/spdk/ftl.h 00:08:18.678 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:18.678 TEST_HEADER include/spdk/gpt_spec.h 00:08:18.678 TEST_HEADER include/spdk/hexlify.h 00:08:18.678 TEST_HEADER include/spdk/histogram_data.h 00:08:18.678 TEST_HEADER include/spdk/idxd.h 00:08:18.678 TEST_HEADER include/spdk/idxd_spec.h 00:08:18.678 TEST_HEADER include/spdk/init.h 00:08:18.678 TEST_HEADER include/spdk/ioat.h 00:08:18.678 TEST_HEADER include/spdk/ioat_spec.h 00:08:18.678 TEST_HEADER include/spdk/iscsi_spec.h 00:08:18.678 TEST_HEADER include/spdk/json.h 00:08:18.678 CC examples/util/zipf/zipf.o 00:08:18.937 CC test/thread/poller_perf/poller_perf.o 00:08:18.937 TEST_HEADER include/spdk/jsonrpc.h 00:08:18.937 TEST_HEADER include/spdk/keyring.h 00:08:18.937 TEST_HEADER include/spdk/keyring_module.h 00:08:18.937 TEST_HEADER include/spdk/likely.h 00:08:18.937 TEST_HEADER include/spdk/log.h 00:08:18.937 TEST_HEADER include/spdk/lvol.h 00:08:18.937 TEST_HEADER include/spdk/md5.h 00:08:18.937 TEST_HEADER include/spdk/memory.h 00:08:18.937 CC test/dma/test_dma/test_dma.o 00:08:18.937 TEST_HEADER include/spdk/mmio.h 00:08:18.937 TEST_HEADER 
include/spdk/nbd.h 00:08:18.937 TEST_HEADER include/spdk/net.h 00:08:18.937 CC test/app/bdev_svc/bdev_svc.o 00:08:18.937 TEST_HEADER include/spdk/notify.h 00:08:18.937 TEST_HEADER include/spdk/nvme.h 00:08:18.937 TEST_HEADER include/spdk/nvme_intel.h 00:08:18.937 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:18.937 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:18.937 TEST_HEADER include/spdk/nvme_spec.h 00:08:18.937 TEST_HEADER include/spdk/nvme_zns.h 00:08:18.937 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:18.937 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:18.937 TEST_HEADER include/spdk/nvmf.h 00:08:18.937 TEST_HEADER include/spdk/nvmf_spec.h 00:08:18.937 TEST_HEADER include/spdk/nvmf_transport.h 00:08:18.937 TEST_HEADER include/spdk/opal.h 00:08:18.937 TEST_HEADER include/spdk/opal_spec.h 00:08:18.937 TEST_HEADER include/spdk/pci_ids.h 00:08:18.937 TEST_HEADER include/spdk/pipe.h 00:08:18.937 TEST_HEADER include/spdk/queue.h 00:08:18.937 TEST_HEADER include/spdk/reduce.h 00:08:18.937 TEST_HEADER include/spdk/rpc.h 00:08:18.937 TEST_HEADER include/spdk/scheduler.h 00:08:18.937 TEST_HEADER include/spdk/scsi.h 00:08:18.937 TEST_HEADER include/spdk/scsi_spec.h 00:08:18.937 TEST_HEADER include/spdk/sock.h 00:08:18.937 TEST_HEADER include/spdk/stdinc.h 00:08:18.937 TEST_HEADER include/spdk/string.h 00:08:18.937 TEST_HEADER include/spdk/thread.h 00:08:18.937 TEST_HEADER include/spdk/trace.h 00:08:18.937 TEST_HEADER include/spdk/trace_parser.h 00:08:18.937 TEST_HEADER include/spdk/tree.h 00:08:18.937 TEST_HEADER include/spdk/ublk.h 00:08:18.937 TEST_HEADER include/spdk/util.h 00:08:18.937 TEST_HEADER include/spdk/uuid.h 00:08:18.937 LINK poller_perf 00:08:18.937 TEST_HEADER include/spdk/version.h 00:08:18.937 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:18.937 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:18.937 TEST_HEADER include/spdk/vhost.h 00:08:18.937 TEST_HEADER include/spdk/vmd.h 00:08:18.937 LINK spdk_trace_record 00:08:18.937 LINK zipf 00:08:18.937 TEST_HEADER include/spdk/xor.h 00:08:18.937 LINK nvmf_tgt 00:08:18.937 LINK iscsi_tgt 00:08:18.937 TEST_HEADER include/spdk/zipf.h 00:08:18.937 CXX test/cpp_headers/accel.o 00:08:19.195 LINK spdk_tgt 00:08:19.195 LINK bdev_svc 00:08:19.195 LINK spdk_trace 00:08:19.195 CXX test/cpp_headers/accel_module.o 00:08:19.195 CC app/spdk_nvme_perf/perf.o 00:08:19.195 CC app/spdk_lspci/spdk_lspci.o 00:08:19.454 CC app/spdk_nvme_identify/identify.o 00:08:19.454 CC examples/ioat/perf/perf.o 00:08:19.454 CC app/spdk_nvme_discover/discovery_aer.o 00:08:19.454 CXX test/cpp_headers/assert.o 00:08:19.454 LINK test_dma 00:08:19.454 LINK spdk_lspci 00:08:19.454 CC app/spdk_top/spdk_top.o 00:08:19.454 CC examples/ioat/verify/verify.o 00:08:19.454 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:19.712 LINK spdk_nvme_discover 00:08:19.712 CXX test/cpp_headers/barrier.o 00:08:19.712 LINK ioat_perf 00:08:19.712 CC app/spdk_dd/spdk_dd.o 00:08:19.712 LINK verify 00:08:19.712 CXX test/cpp_headers/base64.o 00:08:19.970 CC app/vhost/vhost.o 00:08:19.970 CC app/fio/nvme/fio_plugin.o 00:08:19.970 CC test/env/mem_callbacks/mem_callbacks.o 00:08:19.970 LINK nvme_fuzz 00:08:19.970 CXX test/cpp_headers/bdev.o 00:08:20.229 LINK vhost 00:08:20.229 CC examples/vmd/lsvmd/lsvmd.o 00:08:20.229 LINK spdk_nvme_perf 00:08:20.229 CXX test/cpp_headers/bdev_module.o 00:08:20.229 LINK spdk_nvme_identify 00:08:20.229 LINK spdk_dd 00:08:20.229 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:20.229 LINK lsvmd 00:08:20.487 LINK spdk_top 00:08:20.487 CXX test/cpp_headers/bdev_zone.o 
00:08:20.487 CC app/fio/bdev/fio_plugin.o 00:08:20.487 LINK spdk_nvme 00:08:20.487 CC examples/idxd/perf/perf.o 00:08:20.487 CC examples/vmd/led/led.o 00:08:20.487 CC test/env/vtophys/vtophys.o 00:08:20.487 LINK mem_callbacks 00:08:20.487 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:20.746 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:20.746 CXX test/cpp_headers/bit_array.o 00:08:20.746 CC test/env/memory/memory_ut.o 00:08:20.746 LINK led 00:08:20.746 LINK vtophys 00:08:20.746 CC test/env/pci/pci_ut.o 00:08:20.746 LINK interrupt_tgt 00:08:20.746 CXX test/cpp_headers/bit_pool.o 00:08:20.746 LINK env_dpdk_post_init 00:08:21.003 LINK idxd_perf 00:08:21.003 CXX test/cpp_headers/blob_bdev.o 00:08:21.003 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:21.004 LINK spdk_bdev 00:08:21.004 CC test/event/event_perf/event_perf.o 00:08:21.261 CC test/event/reactor/reactor.o 00:08:21.261 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:21.261 LINK pci_ut 00:08:21.261 CC test/event/reactor_perf/reactor_perf.o 00:08:21.261 CC examples/thread/thread/thread_ex.o 00:08:21.261 CXX test/cpp_headers/blobfs_bdev.o 00:08:21.261 LINK event_perf 00:08:21.261 LINK reactor 00:08:21.261 CC test/event/app_repeat/app_repeat.o 00:08:21.519 LINK reactor_perf 00:08:21.519 CXX test/cpp_headers/blobfs.o 00:08:21.519 LINK thread 00:08:21.519 LINK app_repeat 00:08:21.519 CC test/event/scheduler/scheduler.o 00:08:21.519 CC test/rpc_client/rpc_client_test.o 00:08:21.777 CC test/nvme/aer/aer.o 00:08:21.777 CC examples/sock/hello_world/hello_sock.o 00:08:21.777 CXX test/cpp_headers/blob.o 00:08:21.777 LINK vhost_fuzz 00:08:21.777 CC test/nvme/reset/reset.o 00:08:21.777 LINK rpc_client_test 00:08:21.777 LINK scheduler 00:08:21.777 CXX test/cpp_headers/conf.o 00:08:22.035 CC examples/accel/perf/accel_perf.o 00:08:22.035 LINK hello_sock 00:08:22.035 LINK iscsi_fuzz 00:08:22.035 LINK aer 00:08:22.035 CC test/app/histogram_perf/histogram_perf.o 00:08:22.035 LINK memory_ut 00:08:22.035 CXX test/cpp_headers/config.o 00:08:22.035 CXX test/cpp_headers/cpuset.o 00:08:22.035 LINK reset 00:08:22.035 LINK histogram_perf 00:08:22.293 CC test/app/jsoncat/jsoncat.o 00:08:22.293 CXX test/cpp_headers/crc16.o 00:08:22.293 CC test/accel/dif/dif.o 00:08:22.293 CC test/blobfs/mkfs/mkfs.o 00:08:22.293 LINK jsoncat 00:08:22.293 CC test/nvme/sgl/sgl.o 00:08:22.293 CC test/nvme/e2edp/nvme_dp.o 00:08:22.293 CC examples/blob/hello_world/hello_blob.o 00:08:22.293 CXX test/cpp_headers/crc32.o 00:08:22.293 CC test/nvme/overhead/overhead.o 00:08:22.552 CC test/lvol/esnap/esnap.o 00:08:22.552 LINK accel_perf 00:08:22.552 CXX test/cpp_headers/crc64.o 00:08:22.552 LINK mkfs 00:08:22.552 CC test/app/stub/stub.o 00:08:22.552 CXX test/cpp_headers/dif.o 00:08:22.552 LINK sgl 00:08:22.552 LINK nvme_dp 00:08:22.552 LINK hello_blob 00:08:22.811 LINK overhead 00:08:22.811 LINK stub 00:08:22.811 CXX test/cpp_headers/dma.o 00:08:22.811 CXX test/cpp_headers/endian.o 00:08:22.811 CXX test/cpp_headers/env_dpdk.o 00:08:22.811 CC test/nvme/err_injection/err_injection.o 00:08:23.069 CC test/nvme/startup/startup.o 00:08:23.069 LINK dif 00:08:23.069 CXX test/cpp_headers/env.o 00:08:23.069 CC examples/blob/cli/blobcli.o 00:08:23.069 CC examples/nvme/hello_world/hello_world.o 00:08:23.069 CC examples/nvme/reconnect/reconnect.o 00:08:23.069 LINK err_injection 00:08:23.069 CXX test/cpp_headers/event.o 00:08:23.069 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:23.069 LINK startup 00:08:23.327 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:23.327 LINK hello_world 
00:08:23.327 CXX test/cpp_headers/fd_group.o 00:08:23.328 CC test/nvme/reserve/reserve.o 00:08:23.586 CC test/bdev/bdevio/bdevio.o 00:08:23.586 LINK reconnect 00:08:23.586 LINK hello_fsdev 00:08:23.586 CXX test/cpp_headers/fd.o 00:08:23.586 LINK blobcli 00:08:23.586 CC examples/bdev/hello_world/hello_bdev.o 00:08:23.586 CC examples/bdev/bdevperf/bdevperf.o 00:08:23.586 LINK reserve 00:08:23.586 LINK nvme_manage 00:08:23.843 CC examples/nvme/arbitration/arbitration.o 00:08:23.843 CXX test/cpp_headers/file.o 00:08:23.843 CXX test/cpp_headers/fsdev.o 00:08:23.843 LINK hello_bdev 00:08:23.843 LINK bdevio 00:08:23.843 CC test/nvme/simple_copy/simple_copy.o 00:08:23.843 CXX test/cpp_headers/fsdev_module.o 00:08:23.843 CC test/nvme/connect_stress/connect_stress.o 00:08:23.843 CC examples/nvme/hotplug/hotplug.o 00:08:24.101 CC test/nvme/boot_partition/boot_partition.o 00:08:24.101 LINK arbitration 00:08:24.101 CC test/nvme/compliance/nvme_compliance.o 00:08:24.101 LINK connect_stress 00:08:24.101 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:24.101 CXX test/cpp_headers/ftl.o 00:08:24.101 LINK simple_copy 00:08:24.101 LINK boot_partition 00:08:24.404 LINK hotplug 00:08:24.404 CC examples/nvme/abort/abort.o 00:08:24.404 CXX test/cpp_headers/fuse_dispatcher.o 00:08:24.404 CXX test/cpp_headers/gpt_spec.o 00:08:24.404 LINK cmb_copy 00:08:24.404 CXX test/cpp_headers/hexlify.o 00:08:24.404 CC test/nvme/fused_ordering/fused_ordering.o 00:08:24.404 LINK bdevperf 00:08:24.404 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:24.662 CXX test/cpp_headers/histogram_data.o 00:08:24.662 CXX test/cpp_headers/idxd.o 00:08:24.662 CXX test/cpp_headers/idxd_spec.o 00:08:24.662 LINK nvme_compliance 00:08:24.662 CXX test/cpp_headers/init.o 00:08:24.662 LINK fused_ordering 00:08:24.662 LINK pmr_persistence 00:08:24.662 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:24.662 LINK abort 00:08:24.662 CXX test/cpp_headers/ioat.o 00:08:24.662 CXX test/cpp_headers/ioat_spec.o 00:08:24.662 CXX test/cpp_headers/iscsi_spec.o 00:08:24.921 CC test/nvme/fdp/fdp.o 00:08:24.921 CC test/nvme/cuse/cuse.o 00:08:24.921 CXX test/cpp_headers/json.o 00:08:24.921 CXX test/cpp_headers/jsonrpc.o 00:08:24.921 CXX test/cpp_headers/keyring.o 00:08:24.921 CXX test/cpp_headers/keyring_module.o 00:08:24.921 CXX test/cpp_headers/likely.o 00:08:24.921 LINK doorbell_aers 00:08:24.921 CXX test/cpp_headers/log.o 00:08:24.921 CXX test/cpp_headers/lvol.o 00:08:25.181 CXX test/cpp_headers/md5.o 00:08:25.181 CXX test/cpp_headers/memory.o 00:08:25.181 CXX test/cpp_headers/mmio.o 00:08:25.181 CXX test/cpp_headers/nbd.o 00:08:25.181 CC examples/nvmf/nvmf/nvmf.o 00:08:25.181 LINK fdp 00:08:25.181 CXX test/cpp_headers/net.o 00:08:25.181 CXX test/cpp_headers/notify.o 00:08:25.181 CXX test/cpp_headers/nvme.o 00:08:25.181 CXX test/cpp_headers/nvme_intel.o 00:08:25.181 CXX test/cpp_headers/nvme_ocssd.o 00:08:25.439 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:25.439 CXX test/cpp_headers/nvme_spec.o 00:08:25.439 CXX test/cpp_headers/nvme_zns.o 00:08:25.439 CXX test/cpp_headers/nvmf_cmd.o 00:08:25.439 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:25.439 CXX test/cpp_headers/nvmf.o 00:08:25.439 CXX test/cpp_headers/nvmf_spec.o 00:08:25.439 CXX test/cpp_headers/nvmf_transport.o 00:08:25.439 CXX test/cpp_headers/opal.o 00:08:25.439 LINK nvmf 00:08:25.439 CXX test/cpp_headers/opal_spec.o 00:08:25.697 CXX test/cpp_headers/pci_ids.o 00:08:25.697 CXX test/cpp_headers/pipe.o 00:08:25.697 CXX test/cpp_headers/queue.o 00:08:25.697 CXX test/cpp_headers/reduce.o 00:08:25.697 CXX 
test/cpp_headers/rpc.o 00:08:25.697 CXX test/cpp_headers/scheduler.o 00:08:25.697 CXX test/cpp_headers/scsi.o 00:08:25.697 CXX test/cpp_headers/scsi_spec.o 00:08:25.697 CXX test/cpp_headers/sock.o 00:08:25.697 CXX test/cpp_headers/stdinc.o 00:08:25.697 CXX test/cpp_headers/string.o 00:08:25.956 CXX test/cpp_headers/thread.o 00:08:25.956 CXX test/cpp_headers/trace.o 00:08:25.956 CXX test/cpp_headers/trace_parser.o 00:08:25.956 CXX test/cpp_headers/tree.o 00:08:25.956 CXX test/cpp_headers/ublk.o 00:08:25.956 CXX test/cpp_headers/util.o 00:08:25.956 CXX test/cpp_headers/uuid.o 00:08:25.956 CXX test/cpp_headers/version.o 00:08:25.956 CXX test/cpp_headers/vfio_user_pci.o 00:08:25.956 CXX test/cpp_headers/vfio_user_spec.o 00:08:25.956 CXX test/cpp_headers/vhost.o 00:08:25.956 CXX test/cpp_headers/vmd.o 00:08:25.956 CXX test/cpp_headers/xor.o 00:08:26.213 CXX test/cpp_headers/zipf.o 00:08:26.213 LINK cuse 00:08:28.115 LINK esnap 00:08:28.374 00:08:28.374 real 1m29.202s 00:08:28.374 user 8m25.866s 00:08:28.374 sys 1m32.298s 00:08:28.374 13:13:30 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:08:28.374 13:13:30 make -- common/autotest_common.sh@10 -- $ set +x 00:08:28.374 ************************************ 00:08:28.374 END TEST make 00:08:28.374 ************************************ 00:08:28.374 13:13:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:28.374 13:13:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:28.374 13:13:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:28.374 13:13:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:28.374 13:13:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:08:28.374 13:13:30 -- pm/common@44 -- $ pid=5289 00:08:28.374 13:13:30 -- pm/common@50 -- $ kill -TERM 5289 00:08:28.374 13:13:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:28.374 13:13:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:08:28.374 13:13:30 -- pm/common@44 -- $ pid=5290 00:08:28.374 13:13:30 -- pm/common@50 -- $ kill -TERM 5290 00:08:28.374 13:13:30 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:28.374 13:13:30 -- common/autotest_common.sh@1681 -- # lcov --version 00:08:28.374 13:13:30 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:28.633 13:13:30 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:28.633 13:13:30 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.633 13:13:30 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.633 13:13:30 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.633 13:13:30 -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.633 13:13:30 -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.633 13:13:30 -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.633 13:13:30 -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.633 13:13:30 -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.633 13:13:30 -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.633 13:13:30 -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.633 13:13:30 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.633 13:13:30 -- scripts/common.sh@344 -- # case "$op" in 00:08:28.633 13:13:30 -- scripts/common.sh@345 -- # : 1 00:08:28.633 13:13:30 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.633 13:13:30 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.633 13:13:30 -- scripts/common.sh@365 -- # decimal 1 00:08:28.633 13:13:30 -- scripts/common.sh@353 -- # local d=1 00:08:28.633 13:13:30 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.633 13:13:30 -- scripts/common.sh@355 -- # echo 1 00:08:28.633 13:13:30 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.634 13:13:30 -- scripts/common.sh@366 -- # decimal 2 00:08:28.634 13:13:30 -- scripts/common.sh@353 -- # local d=2 00:08:28.634 13:13:30 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.634 13:13:30 -- scripts/common.sh@355 -- # echo 2 00:08:28.634 13:13:30 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.634 13:13:30 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.634 13:13:30 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.634 13:13:30 -- scripts/common.sh@368 -- # return 0 00:08:28.634 13:13:30 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.634 13:13:30 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:28.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.634 --rc genhtml_branch_coverage=1 00:08:28.634 --rc genhtml_function_coverage=1 00:08:28.634 --rc genhtml_legend=1 00:08:28.634 --rc geninfo_all_blocks=1 00:08:28.634 --rc geninfo_unexecuted_blocks=1 00:08:28.634 00:08:28.634 ' 00:08:28.634 13:13:30 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:28.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.634 --rc genhtml_branch_coverage=1 00:08:28.634 --rc genhtml_function_coverage=1 00:08:28.634 --rc genhtml_legend=1 00:08:28.634 --rc geninfo_all_blocks=1 00:08:28.634 --rc geninfo_unexecuted_blocks=1 00:08:28.634 00:08:28.634 ' 00:08:28.634 13:13:30 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:28.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.634 --rc genhtml_branch_coverage=1 00:08:28.634 --rc genhtml_function_coverage=1 00:08:28.634 --rc genhtml_legend=1 00:08:28.634 --rc geninfo_all_blocks=1 00:08:28.634 --rc geninfo_unexecuted_blocks=1 00:08:28.634 00:08:28.634 ' 00:08:28.634 13:13:30 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:28.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.634 --rc genhtml_branch_coverage=1 00:08:28.634 --rc genhtml_function_coverage=1 00:08:28.634 --rc genhtml_legend=1 00:08:28.634 --rc geninfo_all_blocks=1 00:08:28.634 --rc geninfo_unexecuted_blocks=1 00:08:28.634 00:08:28.634 ' 00:08:28.634 13:13:30 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:28.634 13:13:30 -- nvmf/common.sh@7 -- # uname -s 00:08:28.634 13:13:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.634 13:13:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.634 13:13:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.634 13:13:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.634 13:13:30 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.634 13:13:30 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:28.634 13:13:30 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.634 13:13:30 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:28.634 13:13:30 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:08:28.634 13:13:30 -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:08:28.634 13:13:30 -- nvmf/common.sh@17 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.634 13:13:30 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:28.634 13:13:30 -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:08:28.634 13:13:30 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.634 13:13:30 -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:28.634 13:13:30 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.634 13:13:30 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.634 13:13:30 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.634 13:13:30 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.634 13:13:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.634 13:13:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.634 13:13:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.634 13:13:30 -- paths/export.sh@5 -- # export PATH 00:08:28.634 13:13:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.634 13:13:30 -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:08:28.634 13:13:30 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:28.634 13:13:30 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:28.634 13:13:30 -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:28.634 13:13:30 -- nvmf/common.sh@50 -- # : 0 00:08:28.634 13:13:30 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:28.634 13:13:30 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:28.634 13:13:30 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:28.634 13:13:30 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.634 13:13:30 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.634 13:13:30 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:28.634 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:28.634 13:13:30 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:28.634 13:13:30 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:28.634 13:13:30 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:28.634 13:13:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:28.634 13:13:30 -- spdk/autotest.sh@32 -- # uname -s 00:08:28.634 13:13:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:28.634 13:13:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:28.634 13:13:30 -- spdk/autotest.sh@34 -- # mkdir -p 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:28.634 13:13:30 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:08:28.634 13:13:30 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:28.634 13:13:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:28.634 13:13:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:28.634 13:13:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:28.634 13:13:30 -- spdk/autotest.sh@48 -- # udevadm_pid=54372 00:08:28.634 13:13:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:28.634 13:13:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:28.634 13:13:30 -- pm/common@17 -- # local monitor 00:08:28.634 13:13:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:28.634 13:13:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:28.634 13:13:30 -- pm/common@25 -- # sleep 1 00:08:28.634 13:13:30 -- pm/common@21 -- # date +%s 00:08:28.634 13:13:30 -- pm/common@21 -- # date +%s 00:08:28.634 13:13:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727442810 00:08:28.634 13:13:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727442810 00:08:28.634 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727442810_collect-vmstat.pm.log 00:08:28.634 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727442810_collect-cpu-load.pm.log 00:08:30.012 13:13:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:30.012 13:13:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:30.012 13:13:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:30.012 13:13:31 -- common/autotest_common.sh@10 -- # set +x 00:08:30.012 13:13:31 -- spdk/autotest.sh@59 -- # create_test_list 00:08:30.012 13:13:31 -- common/autotest_common.sh@748 -- # xtrace_disable 00:08:30.012 13:13:31 -- common/autotest_common.sh@10 -- # set +x 00:08:30.012 13:13:31 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:30.012 13:13:31 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:30.012 13:13:31 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:08:30.012 13:13:31 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:30.012 13:13:31 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:08:30.012 13:13:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:30.012 13:13:31 -- common/autotest_common.sh@1455 -- # uname 00:08:30.012 13:13:31 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:08:30.012 13:13:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:30.012 13:13:31 -- common/autotest_common.sh@1475 -- # uname 00:08:30.012 13:13:31 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:08:30.012 13:13:31 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:30.012 13:13:31 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:30.012 lcov: LCOV version 1.15 00:08:30.012 13:13:31 -- 
spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:08:48.095 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:48.095 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:09:06.178 13:14:05 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:09:06.178 13:14:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:06.178 13:14:05 -- common/autotest_common.sh@10 -- # set +x 00:09:06.178 13:14:05 -- spdk/autotest.sh@78 -- # rm -f 00:09:06.178 13:14:05 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:06.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:06.178 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:09:06.178 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:09:06.178 13:14:06 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:09:06.178 13:14:06 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:09:06.178 13:14:06 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:09:06.178 13:14:06 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:09:06.178 13:14:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:06.178 13:14:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:09:06.178 13:14:06 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:09:06.178 13:14:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:06.178 13:14:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:06.178 13:14:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:06.178 13:14:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:09:06.178 13:14:06 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:09:06.178 13:14:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:06.178 13:14:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:06.178 13:14:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:06.178 13:14:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:09:06.178 13:14:06 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:09:06.178 13:14:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:09:06.178 13:14:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:06.178 13:14:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:06.178 13:14:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:09:06.178 13:14:06 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:09:06.178 13:14:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:09:06.178 13:14:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:06.178 13:14:06 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:09:06.178 13:14:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:06.178 13:14:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:06.178 13:14:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:09:06.178 13:14:06 -- 
scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:09:06.178 13:14:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:09:06.178 No valid GPT data, bailing 00:09:06.178 13:14:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:06.178 13:14:06 -- scripts/common.sh@394 -- # pt= 00:09:06.178 13:14:06 -- scripts/common.sh@395 -- # return 1 00:09:06.178 13:14:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:09:06.178 1+0 records in 00:09:06.178 1+0 records out 00:09:06.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00433306 s, 242 MB/s 00:09:06.178 13:14:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:06.178 13:14:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:06.178 13:14:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:09:06.178 13:14:06 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:09:06.178 13:14:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:09:06.178 No valid GPT data, bailing 00:09:06.178 13:14:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:09:06.178 13:14:06 -- scripts/common.sh@394 -- # pt= 00:09:06.178 13:14:06 -- scripts/common.sh@395 -- # return 1 00:09:06.178 13:14:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:09:06.178 1+0 records in 00:09:06.178 1+0 records out 00:09:06.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00439792 s, 238 MB/s 00:09:06.178 13:14:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:06.178 13:14:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:06.178 13:14:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:09:06.178 13:14:06 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:09:06.178 13:14:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:09:06.178 No valid GPT data, bailing 00:09:06.178 13:14:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:09:06.178 13:14:06 -- scripts/common.sh@394 -- # pt= 00:09:06.178 13:14:06 -- scripts/common.sh@395 -- # return 1 00:09:06.178 13:14:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:09:06.178 1+0 records in 00:09:06.178 1+0 records out 00:09:06.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436172 s, 240 MB/s 00:09:06.178 13:14:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:06.178 13:14:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:06.178 13:14:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:09:06.178 13:14:06 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:09:06.178 13:14:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:09:06.178 No valid GPT data, bailing 00:09:06.178 13:14:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:09:06.178 13:14:06 -- scripts/common.sh@394 -- # pt= 00:09:06.178 13:14:06 -- scripts/common.sh@395 -- # return 1 00:09:06.178 13:14:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:09:06.178 1+0 records in 00:09:06.178 1+0 records out 00:09:06.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00429548 s, 244 MB/s 00:09:06.178 13:14:06 -- spdk/autotest.sh@105 -- # sync 00:09:06.178 13:14:06 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:09:06.178 13:14:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> 
/dev/null' 00:09:06.178 13:14:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:09:07.113 13:14:08 -- spdk/autotest.sh@111 -- # uname -s 00:09:07.113 13:14:08 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:09:07.113 13:14:08 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:09:07.113 13:14:08 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:09:07.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:07.679 Hugepages 00:09:07.679 node hugesize free / total 00:09:07.679 node0 1048576kB 0 / 0 00:09:07.679 node0 2048kB 0 / 0 00:09:07.679 00:09:07.679 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:07.679 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:09:07.679 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:09:07.937 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:09:07.937 13:14:09 -- spdk/autotest.sh@117 -- # uname -s 00:09:07.937 13:14:09 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:09:07.937 13:14:09 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:09:07.937 13:14:09 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:08.504 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:08.504 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:08.762 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:08.762 13:14:10 -- common/autotest_common.sh@1515 -- # sleep 1 00:09:09.740 13:14:11 -- common/autotest_common.sh@1516 -- # bdfs=() 00:09:09.740 13:14:11 -- common/autotest_common.sh@1516 -- # local bdfs 00:09:09.740 13:14:11 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:09:09.740 13:14:11 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:09:09.740 13:14:11 -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:09.740 13:14:11 -- common/autotest_common.sh@1496 -- # local bdfs 00:09:09.740 13:14:11 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:09.740 13:14:11 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:09.740 13:14:11 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:09.740 13:14:11 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:09:09.740 13:14:11 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:09.740 13:14:11 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:09.999 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:10.257 Waiting for block devices as requested 00:09:10.257 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:10.257 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:10.257 13:14:12 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:09:10.257 13:14:12 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:09:10.257 13:14:12 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:09:10.258 13:14:12 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:09:10.258 13:14:12 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:10.258 13:14:12 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:09:10.258 13:14:12 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:10.258 13:14:12 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:09:10.258 13:14:12 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:09:10.258 13:14:12 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:09:10.258 13:14:12 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:09:10.258 13:14:12 -- common/autotest_common.sh@1529 -- # grep oacs 00:09:10.258 13:14:12 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:09:10.258 13:14:12 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:09:10.258 13:14:12 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:09:10.258 13:14:12 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:09:10.258 13:14:12 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:09:10.258 13:14:12 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:09:10.258 13:14:12 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:09:10.258 13:14:12 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:09:10.258 13:14:12 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:09:10.258 13:14:12 -- common/autotest_common.sh@1541 -- # continue 00:09:10.258 13:14:12 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:09:10.258 13:14:12 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:09:10.258 13:14:12 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:09:10.258 13:14:12 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:09:10.258 13:14:12 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:10.516 13:14:12 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:09:10.516 13:14:12 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:10.516 13:14:12 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:09:10.516 13:14:12 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:09:10.516 13:14:12 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:09:10.516 13:14:12 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:09:10.516 13:14:12 -- common/autotest_common.sh@1529 -- # grep oacs 00:09:10.516 13:14:12 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:09:10.516 13:14:12 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:09:10.516 13:14:12 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:09:10.516 13:14:12 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:09:10.516 13:14:12 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:09:10.516 13:14:12 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:09:10.516 13:14:12 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:09:10.516 13:14:12 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:09:10.516 13:14:12 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:09:10.516 13:14:12 -- common/autotest_common.sh@1541 -- # continue 00:09:10.516 13:14:12 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:09:10.516 13:14:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:10.516 13:14:12 -- common/autotest_common.sh@10 -- # set +x 00:09:10.516 13:14:12 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:09:10.516 13:14:12 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:09:10.516 13:14:12 -- common/autotest_common.sh@10 -- # set +x 00:09:10.516 13:14:12 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:11.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:11.084 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:11.342 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:11.342 13:14:12 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:09:11.342 13:14:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:11.342 13:14:12 -- common/autotest_common.sh@10 -- # set +x 00:09:11.342 13:14:13 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:09:11.342 13:14:13 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:09:11.342 13:14:13 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:09:11.342 13:14:13 -- common/autotest_common.sh@1561 -- # bdfs=() 00:09:11.342 13:14:13 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:09:11.342 13:14:13 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:09:11.342 13:14:13 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:09:11.342 13:14:13 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:09:11.342 13:14:13 -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:11.342 13:14:13 -- common/autotest_common.sh@1496 -- # local bdfs 00:09:11.342 13:14:13 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:11.342 13:14:13 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:11.342 13:14:13 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:11.342 13:14:13 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:09:11.342 13:14:13 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:11.342 13:14:13 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:09:11.342 13:14:13 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:09:11.342 13:14:13 -- common/autotest_common.sh@1564 -- # device=0x0010 00:09:11.342 13:14:13 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:11.342 13:14:13 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:09:11.342 13:14:13 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:09:11.342 13:14:13 -- common/autotest_common.sh@1564 -- # device=0x0010 00:09:11.342 13:14:13 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:11.342 13:14:13 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:09:11.343 13:14:13 -- common/autotest_common.sh@1570 -- # return 0 00:09:11.343 13:14:13 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:09:11.343 13:14:13 -- common/autotest_common.sh@1578 -- # return 0 00:09:11.343 13:14:13 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:09:11.343 13:14:13 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:09:11.343 13:14:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:11.343 13:14:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:11.343 13:14:13 -- spdk/autotest.sh@149 -- # timing_enter lib 00:09:11.343 13:14:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:11.343 13:14:13 -- common/autotest_common.sh@10 -- # set +x 00:09:11.343 13:14:13 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:09:11.343 13:14:13 -- spdk/autotest.sh@152 -- # export 
SPDK_SOCK_IMPL_DEFAULT=uring 00:09:11.343 13:14:13 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:09:11.343 13:14:13 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:11.343 13:14:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:11.343 13:14:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.343 13:14:13 -- common/autotest_common.sh@10 -- # set +x 00:09:11.343 ************************************ 00:09:11.343 START TEST env 00:09:11.343 ************************************ 00:09:11.343 13:14:13 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:11.602 * Looking for test storage... 00:09:11.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:11.602 13:14:13 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:11.602 13:14:13 env -- common/autotest_common.sh@1681 -- # lcov --version 00:09:11.602 13:14:13 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:11.602 13:14:13 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:11.602 13:14:13 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.602 13:14:13 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.602 13:14:13 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.602 13:14:13 env -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.602 13:14:13 env -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.602 13:14:13 env -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.602 13:14:13 env -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.602 13:14:13 env -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.602 13:14:13 env -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.602 13:14:13 env -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.602 13:14:13 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.602 13:14:13 env -- scripts/common.sh@344 -- # case "$op" in 00:09:11.602 13:14:13 env -- scripts/common.sh@345 -- # : 1 00:09:11.602 13:14:13 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.602 13:14:13 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:11.602 13:14:13 env -- scripts/common.sh@365 -- # decimal 1 00:09:11.602 13:14:13 env -- scripts/common.sh@353 -- # local d=1 00:09:11.602 13:14:13 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.602 13:14:13 env -- scripts/common.sh@355 -- # echo 1 00:09:11.602 13:14:13 env -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.602 13:14:13 env -- scripts/common.sh@366 -- # decimal 2 00:09:11.602 13:14:13 env -- scripts/common.sh@353 -- # local d=2 00:09:11.602 13:14:13 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.602 13:14:13 env -- scripts/common.sh@355 -- # echo 2 00:09:11.602 13:14:13 env -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.602 13:14:13 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.602 13:14:13 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.602 13:14:13 env -- scripts/common.sh@368 -- # return 0 00:09:11.602 13:14:13 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.602 13:14:13 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:11.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.602 --rc genhtml_branch_coverage=1 00:09:11.602 --rc genhtml_function_coverage=1 00:09:11.602 --rc genhtml_legend=1 00:09:11.602 --rc geninfo_all_blocks=1 00:09:11.602 --rc geninfo_unexecuted_blocks=1 00:09:11.602 00:09:11.602 ' 00:09:11.602 13:14:13 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:11.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.602 --rc genhtml_branch_coverage=1 00:09:11.602 --rc genhtml_function_coverage=1 00:09:11.602 --rc genhtml_legend=1 00:09:11.602 --rc geninfo_all_blocks=1 00:09:11.602 --rc geninfo_unexecuted_blocks=1 00:09:11.602 00:09:11.602 ' 00:09:11.602 13:14:13 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:11.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.602 --rc genhtml_branch_coverage=1 00:09:11.602 --rc genhtml_function_coverage=1 00:09:11.602 --rc genhtml_legend=1 00:09:11.602 --rc geninfo_all_blocks=1 00:09:11.602 --rc geninfo_unexecuted_blocks=1 00:09:11.602 00:09:11.602 ' 00:09:11.602 13:14:13 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:11.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.602 --rc genhtml_branch_coverage=1 00:09:11.602 --rc genhtml_function_coverage=1 00:09:11.602 --rc genhtml_legend=1 00:09:11.602 --rc geninfo_all_blocks=1 00:09:11.602 --rc geninfo_unexecuted_blocks=1 00:09:11.602 00:09:11.602 ' 00:09:11.602 13:14:13 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:11.602 13:14:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:11.602 13:14:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.602 13:14:13 env -- common/autotest_common.sh@10 -- # set +x 00:09:11.602 ************************************ 00:09:11.602 START TEST env_memory 00:09:11.602 ************************************ 00:09:11.602 13:14:13 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:11.602 00:09:11.602 00:09:11.602 CUnit - A unit testing framework for C - Version 2.1-3 00:09:11.602 http://cunit.sourceforge.net/ 00:09:11.602 00:09:11.602 00:09:11.602 Suite: memory 00:09:11.602 Test: alloc and free memory map ...[2024-09-27 13:14:13.365190] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:11.602 passed 00:09:11.602 Test: mem map translation ...[2024-09-27 13:14:13.396619] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:11.602 [2024-09-27 13:14:13.396666] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:11.602 [2024-09-27 13:14:13.396735] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:11.602 [2024-09-27 13:14:13.396749] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:11.861 passed 00:09:11.861 Test: mem map registration ...[2024-09-27 13:14:13.460363] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:09:11.861 [2024-09-27 13:14:13.460405] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:09:11.861 passed 00:09:11.861 Test: mem map adjacent registrations ...passed 00:09:11.861 00:09:11.861 Run Summary: Type Total Ran Passed Failed Inactive 00:09:11.861 suites 1 1 n/a 0 0 00:09:11.861 tests 4 4 4 0 0 00:09:11.861 asserts 152 152 152 0 n/a 00:09:11.861 00:09:11.861 Elapsed time = 0.200 seconds 00:09:11.861 00:09:11.861 real 0m0.216s 00:09:11.861 user 0m0.203s 00:09:11.861 sys 0m0.010s 00:09:11.861 13:14:13 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.861 13:14:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:11.861 ************************************ 00:09:11.861 END TEST env_memory 00:09:11.861 ************************************ 00:09:11.861 13:14:13 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:11.861 13:14:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:11.861 13:14:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.861 13:14:13 env -- common/autotest_common.sh@10 -- # set +x 00:09:11.861 ************************************ 00:09:11.861 START TEST env_vtophys 00:09:11.861 ************************************ 00:09:11.861 13:14:13 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:11.861 EAL: lib.eal log level changed from notice to debug 00:09:11.861 EAL: Detected lcore 0 as core 0 on socket 0 00:09:11.861 EAL: Detected lcore 1 as core 0 on socket 0 00:09:11.861 EAL: Detected lcore 2 as core 0 on socket 0 00:09:11.861 EAL: Detected lcore 3 as core 0 on socket 0 00:09:11.861 EAL: Detected lcore 4 as core 0 on socket 0 00:09:11.861 EAL: Detected lcore 5 as core 0 on socket 0 00:09:11.861 EAL: Detected lcore 6 as core 0 on socket 0 00:09:11.861 EAL: Detected lcore 7 as core 0 on socket 0 00:09:11.861 EAL: Detected lcore 8 as core 0 on socket 0 00:09:11.861 EAL: Detected lcore 9 as core 0 on socket 0 00:09:11.861 EAL: Maximum logical cores by configuration: 128 00:09:11.861 EAL: Detected CPU lcores: 10 00:09:11.861 EAL: Detected NUMA nodes: 1 00:09:11.861 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:11.861 EAL: Detected shared linkage of DPDK 00:09:11.861 EAL: No 
shared files mode enabled, IPC will be disabled 00:09:11.861 EAL: Selected IOVA mode 'PA' 00:09:11.861 EAL: Probing VFIO support... 00:09:11.861 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:11.861 EAL: VFIO modules not loaded, skipping VFIO support... 00:09:11.861 EAL: Ask a virtual area of 0x2e000 bytes 00:09:11.861 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:11.861 EAL: Setting up physically contiguous memory... 00:09:11.862 EAL: Setting maximum number of open files to 524288 00:09:11.862 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:11.862 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:11.862 EAL: Ask a virtual area of 0x61000 bytes 00:09:11.862 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:11.862 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:11.862 EAL: Ask a virtual area of 0x400000000 bytes 00:09:11.862 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:11.862 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:11.862 EAL: Ask a virtual area of 0x61000 bytes 00:09:11.862 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:11.862 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:11.862 EAL: Ask a virtual area of 0x400000000 bytes 00:09:11.862 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:11.862 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:11.862 EAL: Ask a virtual area of 0x61000 bytes 00:09:11.862 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:11.862 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:11.862 EAL: Ask a virtual area of 0x400000000 bytes 00:09:11.862 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:11.862 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:11.862 EAL: Ask a virtual area of 0x61000 bytes 00:09:11.862 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:11.862 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:11.862 EAL: Ask a virtual area of 0x400000000 bytes 00:09:11.862 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:11.862 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:11.862 EAL: Hugepages will be freed exactly as allocated. 00:09:11.862 EAL: No shared files mode enabled, IPC is disabled 00:09:11.862 EAL: No shared files mode enabled, IPC is disabled 00:09:12.121 EAL: TSC frequency is ~2200000 KHz 00:09:12.121 EAL: Main lcore 0 is ready (tid=7f627fc2da00;cpuset=[0]) 00:09:12.121 EAL: Trying to obtain current memory policy. 00:09:12.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:12.121 EAL: Restoring previous memory policy: 0 00:09:12.121 EAL: request: mp_malloc_sync 00:09:12.121 EAL: No shared files mode enabled, IPC is disabled 00:09:12.121 EAL: Heap on socket 0 was expanded by 2MB 00:09:12.121 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:12.121 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:12.121 EAL: Mem event callback 'spdk:(nil)' registered 00:09:12.121 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:09:12.121 00:09:12.121 00:09:12.121 CUnit - A unit testing framework for C - Version 2.1-3 00:09:12.121 http://cunit.sourceforge.net/ 00:09:12.121 00:09:12.121 00:09:12.121 Suite: components_suite 00:09:12.121 Test: vtophys_malloc_test ...passed 00:09:12.121 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:12.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:12.121 EAL: Restoring previous memory policy: 4 00:09:12.121 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.121 EAL: request: mp_malloc_sync 00:09:12.121 EAL: No shared files mode enabled, IPC is disabled 00:09:12.121 EAL: Heap on socket 0 was expanded by 4MB 00:09:12.121 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.121 EAL: request: mp_malloc_sync 00:09:12.121 EAL: No shared files mode enabled, IPC is disabled 00:09:12.121 EAL: Heap on socket 0 was shrunk by 4MB 00:09:12.121 EAL: Trying to obtain current memory policy. 00:09:12.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:12.121 EAL: Restoring previous memory policy: 4 00:09:12.121 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.121 EAL: request: mp_malloc_sync 00:09:12.121 EAL: No shared files mode enabled, IPC is disabled 00:09:12.121 EAL: Heap on socket 0 was expanded by 6MB 00:09:12.121 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.121 EAL: request: mp_malloc_sync 00:09:12.121 EAL: No shared files mode enabled, IPC is disabled 00:09:12.121 EAL: Heap on socket 0 was shrunk by 6MB 00:09:12.121 EAL: Trying to obtain current memory policy. 00:09:12.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:12.121 EAL: Restoring previous memory policy: 4 00:09:12.121 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.121 EAL: request: mp_malloc_sync 00:09:12.121 EAL: No shared files mode enabled, IPC is disabled 00:09:12.121 EAL: Heap on socket 0 was expanded by 10MB 00:09:12.121 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.121 EAL: request: mp_malloc_sync 00:09:12.121 EAL: No shared files mode enabled, IPC is disabled 00:09:12.121 EAL: Heap on socket 0 was shrunk by 10MB 00:09:12.121 EAL: Trying to obtain current memory policy. 00:09:12.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:12.121 EAL: Restoring previous memory policy: 4 00:09:12.121 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.121 EAL: request: mp_malloc_sync 00:09:12.121 EAL: No shared files mode enabled, IPC is disabled 00:09:12.121 EAL: Heap on socket 0 was expanded by 18MB 00:09:12.121 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.121 EAL: request: mp_malloc_sync 00:09:12.121 EAL: No shared files mode enabled, IPC is disabled 00:09:12.121 EAL: Heap on socket 0 was shrunk by 18MB 00:09:12.121 EAL: Trying to obtain current memory policy. 00:09:12.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:12.121 EAL: Restoring previous memory policy: 4 00:09:12.121 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.121 EAL: request: mp_malloc_sync 00:09:12.121 EAL: No shared files mode enabled, IPC is disabled 00:09:12.121 EAL: Heap on socket 0 was expanded by 34MB 00:09:12.121 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.121 EAL: request: mp_malloc_sync 00:09:12.121 EAL: No shared files mode enabled, IPC is disabled 00:09:12.121 EAL: Heap on socket 0 was shrunk by 34MB 00:09:12.121 EAL: Trying to obtain current memory policy. 
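The alternating "Heap on socket 0 was expanded by N MB" / "shrunk by N MB" lines above come from DPDK backing each allocation round of the vtophys test with 2 MB hugepages and returning them on free. A minimal, hypothetical watcher for that churn, using the kernel's standard hugepage counters (the script name and 1-second poll interval are illustrative; this is not part of the autotest scripts):

# watch_hugepages.sh (hypothetical helper, not part of the SPDK autotest scripts):
# poll the 2 MB hugepage pool while a test such as vtophys runs, to see pages
# being handed out and returned as the EAL heap grows and shrinks.
HP=/sys/kernel/mm/hugepages/hugepages-2048kB
while sleep 1; do
  total=$(cat "$HP/nr_hugepages")
  free=$(cat "$HP/free_hugepages")
  printf '%s total=%s free=%s in_use=%s\n' "$(date +%T)" "$total" "$free" "$((total - free))"
done

Run it in a second shell alongside the test; the in_use column should step up and down roughly in sync with the expand/shrink messages.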
00:09:12.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:12.121 EAL: Restoring previous memory policy: 4 00:09:12.121 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.121 EAL: request: mp_malloc_sync 00:09:12.121 EAL: No shared files mode enabled, IPC is disabled 00:09:12.121 EAL: Heap on socket 0 was expanded by 66MB 00:09:12.121 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.121 EAL: request: mp_malloc_sync 00:09:12.121 EAL: No shared files mode enabled, IPC is disabled 00:09:12.121 EAL: Heap on socket 0 was shrunk by 66MB 00:09:12.121 EAL: Trying to obtain current memory policy. 00:09:12.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:12.121 EAL: Restoring previous memory policy: 4 00:09:12.121 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.121 EAL: request: mp_malloc_sync 00:09:12.121 EAL: No shared files mode enabled, IPC is disabled 00:09:12.121 EAL: Heap on socket 0 was expanded by 130MB 00:09:12.121 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.121 EAL: request: mp_malloc_sync 00:09:12.121 EAL: No shared files mode enabled, IPC is disabled 00:09:12.121 EAL: Heap on socket 0 was shrunk by 130MB 00:09:12.121 EAL: Trying to obtain current memory policy. 00:09:12.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:12.121 EAL: Restoring previous memory policy: 4 00:09:12.121 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.121 EAL: request: mp_malloc_sync 00:09:12.121 EAL: No shared files mode enabled, IPC is disabled 00:09:12.121 EAL: Heap on socket 0 was expanded by 258MB 00:09:12.121 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.121 EAL: request: mp_malloc_sync 00:09:12.121 EAL: No shared files mode enabled, IPC is disabled 00:09:12.121 EAL: Heap on socket 0 was shrunk by 258MB 00:09:12.121 EAL: Trying to obtain current memory policy. 00:09:12.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:12.381 EAL: Restoring previous memory policy: 4 00:09:12.381 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.381 EAL: request: mp_malloc_sync 00:09:12.381 EAL: No shared files mode enabled, IPC is disabled 00:09:12.381 EAL: Heap on socket 0 was expanded by 514MB 00:09:12.381 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.381 EAL: request: mp_malloc_sync 00:09:12.381 EAL: No shared files mode enabled, IPC is disabled 00:09:12.381 EAL: Heap on socket 0 was shrunk by 514MB 00:09:12.381 EAL: Trying to obtain current memory policy. 
00:09:12.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:12.640 EAL: Restoring previous memory policy: 4 00:09:12.640 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.640 EAL: request: mp_malloc_sync 00:09:12.640 EAL: No shared files mode enabled, IPC is disabled 00:09:12.640 EAL: Heap on socket 0 was expanded by 1026MB 00:09:12.640 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.640 passed 00:09:12.640 00:09:12.640 Run Summary: Type Total Ran Passed Failed Inactive 00:09:12.640 suites 1 1 n/a 0 0 00:09:12.640 tests 2 2 2 0 0 00:09:12.640 asserts 5407 5407 5407 0 n/a 00:09:12.640 00:09:12.640 Elapsed time = 0.707 seconds 00:09:12.640 EAL: request: mp_malloc_sync 00:09:12.640 EAL: No shared files mode enabled, IPC is disabled 00:09:12.640 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:12.640 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.640 EAL: request: mp_malloc_sync 00:09:12.640 EAL: No shared files mode enabled, IPC is disabled 00:09:12.640 EAL: Heap on socket 0 was shrunk by 2MB 00:09:12.640 EAL: No shared files mode enabled, IPC is disabled 00:09:12.640 EAL: No shared files mode enabled, IPC is disabled 00:09:12.640 EAL: No shared files mode enabled, IPC is disabled 00:09:12.640 00:09:12.640 real 0m0.899s 00:09:12.640 user 0m0.467s 00:09:12.640 sys 0m0.304s 00:09:12.899 13:14:14 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.899 13:14:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:12.899 ************************************ 00:09:12.899 END TEST env_vtophys 00:09:12.899 ************************************ 00:09:12.899 13:14:14 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:12.899 13:14:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:12.899 13:14:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.899 13:14:14 env -- common/autotest_common.sh@10 -- # set +x 00:09:12.899 ************************************ 00:09:12.899 START TEST env_pci 00:09:12.899 ************************************ 00:09:12.899 13:14:14 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:12.899 00:09:12.899 00:09:12.899 CUnit - A unit testing framework for C - Version 2.1-3 00:09:12.899 http://cunit.sourceforge.net/ 00:09:12.899 00:09:12.899 00:09:12.899 Suite: pci 00:09:12.899 Test: pci_hook ...[2024-09-27 13:14:14.559651] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56611 has claimed it 00:09:12.899 passed 00:09:12.899 00:09:12.899 Run Summary: Type Total Ran Passed Failed Inactive 00:09:12.899 suites 1 1 n/a 0 0 00:09:12.899 tests 1 1 1 0 0 00:09:12.899 asserts 25 25 25 0 n/a 00:09:12.899 00:09:12.899 Elapsed time = 0.003 seconds 00:09:12.899 EAL: Cannot find device (10000:00:01.0) 00:09:12.899 EAL: Failed to attach device on primary process 00:09:12.899 00:09:12.899 real 0m0.029s 00:09:12.899 user 0m0.014s 00:09:12.899 sys 0m0.014s 00:09:12.899 13:14:14 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.899 13:14:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:12.899 ************************************ 00:09:12.899 END TEST env_pci 00:09:12.899 ************************************ 00:09:12.899 13:14:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:12.899 13:14:14 env -- env/env.sh@15 -- # uname 00:09:12.899 13:14:14 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:12.899 13:14:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:12.899 13:14:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:12.899 13:14:14 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:12.899 13:14:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.899 13:14:14 env -- common/autotest_common.sh@10 -- # set +x 00:09:12.899 ************************************ 00:09:12.899 START TEST env_dpdk_post_init 00:09:12.899 ************************************ 00:09:12.899 13:14:14 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:12.899 EAL: Detected CPU lcores: 10 00:09:12.899 EAL: Detected NUMA nodes: 1 00:09:12.899 EAL: Detected shared linkage of DPDK 00:09:12.899 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:12.899 EAL: Selected IOVA mode 'PA' 00:09:13.157 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:13.157 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:13.157 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:09:13.157 Starting DPDK initialization... 00:09:13.157 Starting SPDK post initialization... 00:09:13.157 SPDK NVMe probe 00:09:13.157 Attaching to 0000:00:10.0 00:09:13.157 Attaching to 0000:00:11.0 00:09:13.157 Attached to 0000:00:10.0 00:09:13.157 Attached to 0000:00:11.0 00:09:13.157 Cleaning up... 00:09:13.157 00:09:13.157 real 0m0.170s 00:09:13.157 user 0m0.045s 00:09:13.157 sys 0m0.026s 00:09:13.157 13:14:14 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.157 13:14:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:13.157 ************************************ 00:09:13.157 END TEST env_dpdk_post_init 00:09:13.157 ************************************ 00:09:13.157 13:14:14 env -- env/env.sh@26 -- # uname 00:09:13.157 13:14:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:13.157 13:14:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:13.157 13:14:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:13.157 13:14:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.157 13:14:14 env -- common/autotest_common.sh@10 -- # set +x 00:09:13.157 ************************************ 00:09:13.157 START TEST env_mem_callbacks 00:09:13.157 ************************************ 00:09:13.157 13:14:14 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:13.157 EAL: Detected CPU lcores: 10 00:09:13.157 EAL: Detected NUMA nodes: 1 00:09:13.157 EAL: Detected shared linkage of DPDK 00:09:13.157 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:13.157 EAL: Selected IOVA mode 'PA' 00:09:13.157 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:13.157 00:09:13.157 00:09:13.157 CUnit - A unit testing framework for C - Version 2.1-3 00:09:13.157 http://cunit.sourceforge.net/ 00:09:13.157 00:09:13.157 00:09:13.157 Suite: memory 00:09:13.157 Test: test ... 
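The "Probe PCI driver: spdk_nvme (1b36:0010)" and "Attaching to 0000:00:10.0 / 0000:00:11.0" lines above refer to the two emulated NVMe functions that setup.sh earlier rebound from the kernel nvme driver to uio_pci_generic. A small sysfs-only sketch for listing which driver each NVMe-class PCI function is currently bound to (the script name is made up; the autotest itself uses scripts/setup.sh status for this):

# list_nvme_bindings.sh (illustrative): report the kernel driver bound to every
# NVMe-class PCI function, mirroring the "nvme -> uio_pci_generic" setup.sh lines.
for dev in /sys/bus/pci/devices/*; do
  # 0x010802 is the PCI class code for NVM Express controllers
  [ "$(cat "$dev/class")" = "0x010802" ] || continue
  drv=none
  [ -e "$dev/driver" ] && drv=$(basename "$(readlink -f "$dev/driver")")
  echo "$(basename "$dev") ($(cat "$dev/vendor"):$(cat "$dev/device")) -> $drv"
done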
00:09:13.157 register 0x200000200000 2097152 00:09:13.157 malloc 3145728 00:09:13.157 register 0x200000400000 4194304 00:09:13.157 buf 0x200000500000 len 3145728 PASSED 00:09:13.157 malloc 64 00:09:13.157 buf 0x2000004fff40 len 64 PASSED 00:09:13.157 malloc 4194304 00:09:13.157 register 0x200000800000 6291456 00:09:13.157 buf 0x200000a00000 len 4194304 PASSED 00:09:13.157 free 0x200000500000 3145728 00:09:13.157 free 0x2000004fff40 64 00:09:13.157 unregister 0x200000400000 4194304 PASSED 00:09:13.157 free 0x200000a00000 4194304 00:09:13.157 unregister 0x200000800000 6291456 PASSED 00:09:13.157 malloc 8388608 00:09:13.157 register 0x200000400000 10485760 00:09:13.157 buf 0x200000600000 len 8388608 PASSED 00:09:13.157 free 0x200000600000 8388608 00:09:13.157 unregister 0x200000400000 10485760 PASSED 00:09:13.157 passed 00:09:13.157 00:09:13.157 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.157 suites 1 1 n/a 0 0 00:09:13.157 tests 1 1 1 0 0 00:09:13.157 asserts 15 15 15 0 n/a 00:09:13.157 00:09:13.157 Elapsed time = 0.008 seconds 00:09:13.158 00:09:13.158 real 0m0.143s 00:09:13.158 user 0m0.014s 00:09:13.158 sys 0m0.027s 00:09:13.158 13:14:14 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.158 13:14:14 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:13.158 ************************************ 00:09:13.158 END TEST env_mem_callbacks 00:09:13.158 ************************************ 00:09:13.416 00:09:13.416 real 0m1.905s 00:09:13.416 user 0m0.944s 00:09:13.416 sys 0m0.622s 00:09:13.416 13:14:15 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.416 ************************************ 00:09:13.416 13:14:15 env -- common/autotest_common.sh@10 -- # set +x 00:09:13.416 END TEST env 00:09:13.416 ************************************ 00:09:13.416 13:14:15 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:13.416 13:14:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:13.416 13:14:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.416 13:14:15 -- common/autotest_common.sh@10 -- # set +x 00:09:13.416 ************************************ 00:09:13.416 START TEST rpc 00:09:13.416 ************************************ 00:09:13.416 13:14:15 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:13.416 * Looking for test storage... 
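The scripts/common.sh trace that follows (and that already ran for the env suite above) is a semantic version comparison used to decide between the lcov 1.x and 2.x option sets. A simplified stand-in with the same effect, leaning on sort -V instead of the field-by-field loop shown in the trace (the helper name is made up; this is not the scripts/common.sh implementation):

# version_lt.sh (simplified stand-in, not scripts/common.sh): true when $1 < $2.
version_lt() {
  [ "$1" = "$2" ] && return 1
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
lcov_ver=$(lcov --version | awk '{print $NF}')   # e.g. "1.15", as in the trace below
if version_lt "$lcov_ver" 2; then
  echo "lcov $lcov_ver is pre-2.0: use the --rc lcov_branch_coverage=1 style options"
fi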
00:09:13.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:13.416 13:14:15 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:13.416 13:14:15 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:13.416 13:14:15 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:13.416 13:14:15 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:13.416 13:14:15 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.416 13:14:15 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.416 13:14:15 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.416 13:14:15 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.416 13:14:15 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.416 13:14:15 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.416 13:14:15 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.416 13:14:15 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.416 13:14:15 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.416 13:14:15 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.416 13:14:15 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.416 13:14:15 rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:13.416 13:14:15 rpc -- scripts/common.sh@345 -- # : 1 00:09:13.416 13:14:15 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.417 13:14:15 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:13.417 13:14:15 rpc -- scripts/common.sh@365 -- # decimal 1 00:09:13.417 13:14:15 rpc -- scripts/common.sh@353 -- # local d=1 00:09:13.417 13:14:15 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.417 13:14:15 rpc -- scripts/common.sh@355 -- # echo 1 00:09:13.417 13:14:15 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.417 13:14:15 rpc -- scripts/common.sh@366 -- # decimal 2 00:09:13.417 13:14:15 rpc -- scripts/common.sh@353 -- # local d=2 00:09:13.417 13:14:15 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.417 13:14:15 rpc -- scripts/common.sh@355 -- # echo 2 00:09:13.417 13:14:15 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.417 13:14:15 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.417 13:14:15 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.417 13:14:15 rpc -- scripts/common.sh@368 -- # return 0 00:09:13.417 13:14:15 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.417 13:14:15 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:13.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.417 --rc genhtml_branch_coverage=1 00:09:13.417 --rc genhtml_function_coverage=1 00:09:13.417 --rc genhtml_legend=1 00:09:13.417 --rc geninfo_all_blocks=1 00:09:13.417 --rc geninfo_unexecuted_blocks=1 00:09:13.417 00:09:13.417 ' 00:09:13.417 13:14:15 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:13.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.417 --rc genhtml_branch_coverage=1 00:09:13.417 --rc genhtml_function_coverage=1 00:09:13.417 --rc genhtml_legend=1 00:09:13.417 --rc geninfo_all_blocks=1 00:09:13.417 --rc geninfo_unexecuted_blocks=1 00:09:13.417 00:09:13.417 ' 00:09:13.417 13:14:15 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:13.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.417 --rc genhtml_branch_coverage=1 00:09:13.417 --rc genhtml_function_coverage=1 00:09:13.417 --rc 
genhtml_legend=1 00:09:13.417 --rc geninfo_all_blocks=1 00:09:13.417 --rc geninfo_unexecuted_blocks=1 00:09:13.417 00:09:13.417 ' 00:09:13.417 13:14:15 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:13.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.417 --rc genhtml_branch_coverage=1 00:09:13.417 --rc genhtml_function_coverage=1 00:09:13.417 --rc genhtml_legend=1 00:09:13.417 --rc geninfo_all_blocks=1 00:09:13.417 --rc geninfo_unexecuted_blocks=1 00:09:13.417 00:09:13.417 ' 00:09:13.417 13:14:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56728 00:09:13.417 13:14:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:13.417 13:14:15 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:13.417 13:14:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56728 00:09:13.417 13:14:15 rpc -- common/autotest_common.sh@831 -- # '[' -z 56728 ']' 00:09:13.417 13:14:15 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.417 13:14:15 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:13.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.417 13:14:15 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.417 13:14:15 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.417 13:14:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.675 [2024-09-27 13:14:15.308168] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:09:13.675 [2024-09-27 13:14:15.308271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56728 ] 00:09:13.675 [2024-09-27 13:14:15.441142] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.675 [2024-09-27 13:14:15.500978] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:13.675 [2024-09-27 13:14:15.501033] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56728' to capture a snapshot of events at runtime. 00:09:13.675 [2024-09-27 13:14:15.501045] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.675 [2024-09-27 13:14:15.501053] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.675 [2024-09-27 13:14:15.501061] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56728 for offline analysis/debug. 
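The app_setup_trace notices above spell out how to inspect the tracepoint group enabled with -e bdev. A hedged sketch using only the command and shared-memory path printed in those notices (pid 56728 is specific to this run, and spdk_trace is assumed to be the copy built under build/bin of this checkout):

    # live snapshot of events from the running target
    build/bin/spdk_trace -s spdk_tgt -p 56728 > trace_snapshot.txt
    # or preserve the shared-memory file for offline analysis/debug,
    # as the second notice suggests
    cp /dev/shm/spdk_tgt_trace.pid56728 /tmp/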
00:09:13.675 [2024-09-27 13:14:15.501089] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.933 [2024-09-27 13:14:15.542074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:14.499 13:14:16 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.499 13:14:16 rpc -- common/autotest_common.sh@864 -- # return 0 00:09:14.500 13:14:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:14.500 13:14:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:14.500 13:14:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:14.500 13:14:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:14.500 13:14:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:14.500 13:14:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.500 13:14:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.500 ************************************ 00:09:14.500 START TEST rpc_integrity 00:09:14.500 ************************************ 00:09:14.500 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:09:14.500 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:14.500 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.500 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:14.500 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.500 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:14.500 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:14.758 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:14.758 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:14.758 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.758 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:14.758 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.758 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:14.758 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:14.758 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.758 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:14.758 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.758 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:14.758 { 00:09:14.758 "name": "Malloc0", 00:09:14.758 "aliases": [ 00:09:14.758 "4741a3c7-a0b5-4f8b-9202-eae58249ee1a" 00:09:14.758 ], 00:09:14.758 "product_name": "Malloc disk", 00:09:14.758 "block_size": 512, 00:09:14.758 "num_blocks": 16384, 00:09:14.758 "uuid": "4741a3c7-a0b5-4f8b-9202-eae58249ee1a", 00:09:14.758 "assigned_rate_limits": { 00:09:14.758 "rw_ios_per_sec": 0, 00:09:14.758 "rw_mbytes_per_sec": 0, 00:09:14.758 "r_mbytes_per_sec": 0, 00:09:14.758 "w_mbytes_per_sec": 0 00:09:14.758 }, 00:09:14.758 "claimed": false, 00:09:14.758 "zoned": false, 00:09:14.758 
"supported_io_types": { 00:09:14.758 "read": true, 00:09:14.758 "write": true, 00:09:14.758 "unmap": true, 00:09:14.758 "flush": true, 00:09:14.758 "reset": true, 00:09:14.758 "nvme_admin": false, 00:09:14.758 "nvme_io": false, 00:09:14.758 "nvme_io_md": false, 00:09:14.758 "write_zeroes": true, 00:09:14.758 "zcopy": true, 00:09:14.758 "get_zone_info": false, 00:09:14.758 "zone_management": false, 00:09:14.758 "zone_append": false, 00:09:14.758 "compare": false, 00:09:14.758 "compare_and_write": false, 00:09:14.758 "abort": true, 00:09:14.758 "seek_hole": false, 00:09:14.758 "seek_data": false, 00:09:14.758 "copy": true, 00:09:14.758 "nvme_iov_md": false 00:09:14.758 }, 00:09:14.758 "memory_domains": [ 00:09:14.758 { 00:09:14.758 "dma_device_id": "system", 00:09:14.758 "dma_device_type": 1 00:09:14.758 }, 00:09:14.758 { 00:09:14.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.758 "dma_device_type": 2 00:09:14.758 } 00:09:14.758 ], 00:09:14.758 "driver_specific": {} 00:09:14.758 } 00:09:14.758 ]' 00:09:14.758 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:14.758 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:14.758 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:14.758 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.758 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:14.758 [2024-09-27 13:14:16.468764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:14.758 [2024-09-27 13:14:16.468810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.758 [2024-09-27 13:14:16.468829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13a22c0 00:09:14.758 [2024-09-27 13:14:16.468838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.758 [2024-09-27 13:14:16.470396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.758 [2024-09-27 13:14:16.470430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:14.758 Passthru0 00:09:14.758 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.758 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:14.758 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.758 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:14.758 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.758 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:14.758 { 00:09:14.758 "name": "Malloc0", 00:09:14.758 "aliases": [ 00:09:14.758 "4741a3c7-a0b5-4f8b-9202-eae58249ee1a" 00:09:14.758 ], 00:09:14.758 "product_name": "Malloc disk", 00:09:14.758 "block_size": 512, 00:09:14.758 "num_blocks": 16384, 00:09:14.758 "uuid": "4741a3c7-a0b5-4f8b-9202-eae58249ee1a", 00:09:14.758 "assigned_rate_limits": { 00:09:14.758 "rw_ios_per_sec": 0, 00:09:14.758 "rw_mbytes_per_sec": 0, 00:09:14.758 "r_mbytes_per_sec": 0, 00:09:14.758 "w_mbytes_per_sec": 0 00:09:14.758 }, 00:09:14.759 "claimed": true, 00:09:14.759 "claim_type": "exclusive_write", 00:09:14.759 "zoned": false, 00:09:14.759 "supported_io_types": { 00:09:14.759 "read": true, 00:09:14.759 "write": true, 00:09:14.759 "unmap": true, 00:09:14.759 "flush": true, 00:09:14.759 "reset": true, 00:09:14.759 "nvme_admin": false, 
00:09:14.759 "nvme_io": false, 00:09:14.759 "nvme_io_md": false, 00:09:14.759 "write_zeroes": true, 00:09:14.759 "zcopy": true, 00:09:14.759 "get_zone_info": false, 00:09:14.759 "zone_management": false, 00:09:14.759 "zone_append": false, 00:09:14.759 "compare": false, 00:09:14.759 "compare_and_write": false, 00:09:14.759 "abort": true, 00:09:14.759 "seek_hole": false, 00:09:14.759 "seek_data": false, 00:09:14.759 "copy": true, 00:09:14.759 "nvme_iov_md": false 00:09:14.759 }, 00:09:14.759 "memory_domains": [ 00:09:14.759 { 00:09:14.759 "dma_device_id": "system", 00:09:14.759 "dma_device_type": 1 00:09:14.759 }, 00:09:14.759 { 00:09:14.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.759 "dma_device_type": 2 00:09:14.759 } 00:09:14.759 ], 00:09:14.759 "driver_specific": {} 00:09:14.759 }, 00:09:14.759 { 00:09:14.759 "name": "Passthru0", 00:09:14.759 "aliases": [ 00:09:14.759 "6a586e6a-b464-57b6-9dd3-5349d1250417" 00:09:14.759 ], 00:09:14.759 "product_name": "passthru", 00:09:14.759 "block_size": 512, 00:09:14.759 "num_blocks": 16384, 00:09:14.759 "uuid": "6a586e6a-b464-57b6-9dd3-5349d1250417", 00:09:14.759 "assigned_rate_limits": { 00:09:14.759 "rw_ios_per_sec": 0, 00:09:14.759 "rw_mbytes_per_sec": 0, 00:09:14.759 "r_mbytes_per_sec": 0, 00:09:14.759 "w_mbytes_per_sec": 0 00:09:14.759 }, 00:09:14.759 "claimed": false, 00:09:14.759 "zoned": false, 00:09:14.759 "supported_io_types": { 00:09:14.759 "read": true, 00:09:14.759 "write": true, 00:09:14.759 "unmap": true, 00:09:14.759 "flush": true, 00:09:14.759 "reset": true, 00:09:14.759 "nvme_admin": false, 00:09:14.759 "nvme_io": false, 00:09:14.759 "nvme_io_md": false, 00:09:14.759 "write_zeroes": true, 00:09:14.759 "zcopy": true, 00:09:14.759 "get_zone_info": false, 00:09:14.759 "zone_management": false, 00:09:14.759 "zone_append": false, 00:09:14.759 "compare": false, 00:09:14.759 "compare_and_write": false, 00:09:14.759 "abort": true, 00:09:14.759 "seek_hole": false, 00:09:14.759 "seek_data": false, 00:09:14.759 "copy": true, 00:09:14.759 "nvme_iov_md": false 00:09:14.759 }, 00:09:14.759 "memory_domains": [ 00:09:14.759 { 00:09:14.759 "dma_device_id": "system", 00:09:14.759 "dma_device_type": 1 00:09:14.759 }, 00:09:14.759 { 00:09:14.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.759 "dma_device_type": 2 00:09:14.759 } 00:09:14.759 ], 00:09:14.759 "driver_specific": { 00:09:14.759 "passthru": { 00:09:14.759 "name": "Passthru0", 00:09:14.759 "base_bdev_name": "Malloc0" 00:09:14.759 } 00:09:14.759 } 00:09:14.759 } 00:09:14.759 ]' 00:09:14.759 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:14.759 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:14.759 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:14.759 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.759 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:14.759 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.759 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:14.759 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.759 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:14.759 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.759 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:14.759 13:14:16 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.759 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:14.759 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.759 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:14.759 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:15.018 13:14:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:15.018 00:09:15.018 real 0m0.314s 00:09:15.018 user 0m0.214s 00:09:15.018 sys 0m0.037s 00:09:15.018 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.018 ************************************ 00:09:15.018 END TEST rpc_integrity 00:09:15.018 13:14:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:15.018 ************************************ 00:09:15.018 13:14:16 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:15.018 13:14:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:15.018 13:14:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.018 13:14:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.018 ************************************ 00:09:15.018 START TEST rpc_plugins 00:09:15.018 ************************************ 00:09:15.018 13:14:16 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:09:15.018 13:14:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:15.018 13:14:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.018 13:14:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:15.018 13:14:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.018 13:14:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:15.018 13:14:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:15.018 13:14:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.018 13:14:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:15.018 13:14:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.018 13:14:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:15.018 { 00:09:15.018 "name": "Malloc1", 00:09:15.018 "aliases": [ 00:09:15.018 "9c22d224-9c3e-433b-85ad-428a1241ea0b" 00:09:15.018 ], 00:09:15.018 "product_name": "Malloc disk", 00:09:15.018 "block_size": 4096, 00:09:15.018 "num_blocks": 256, 00:09:15.018 "uuid": "9c22d224-9c3e-433b-85ad-428a1241ea0b", 00:09:15.018 "assigned_rate_limits": { 00:09:15.018 "rw_ios_per_sec": 0, 00:09:15.018 "rw_mbytes_per_sec": 0, 00:09:15.018 "r_mbytes_per_sec": 0, 00:09:15.018 "w_mbytes_per_sec": 0 00:09:15.018 }, 00:09:15.018 "claimed": false, 00:09:15.018 "zoned": false, 00:09:15.018 "supported_io_types": { 00:09:15.018 "read": true, 00:09:15.018 "write": true, 00:09:15.018 "unmap": true, 00:09:15.018 "flush": true, 00:09:15.018 "reset": true, 00:09:15.018 "nvme_admin": false, 00:09:15.018 "nvme_io": false, 00:09:15.018 "nvme_io_md": false, 00:09:15.018 "write_zeroes": true, 00:09:15.018 "zcopy": true, 00:09:15.018 "get_zone_info": false, 00:09:15.018 "zone_management": false, 00:09:15.018 "zone_append": false, 00:09:15.018 "compare": false, 00:09:15.018 "compare_and_write": false, 00:09:15.018 "abort": true, 00:09:15.018 "seek_hole": false, 00:09:15.018 "seek_data": false, 00:09:15.018 "copy": true, 00:09:15.018 "nvme_iov_md": false 00:09:15.018 }, 00:09:15.018 "memory_domains": [ 00:09:15.018 { 
00:09:15.018 "dma_device_id": "system", 00:09:15.018 "dma_device_type": 1 00:09:15.018 }, 00:09:15.018 { 00:09:15.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.018 "dma_device_type": 2 00:09:15.018 } 00:09:15.018 ], 00:09:15.018 "driver_specific": {} 00:09:15.018 } 00:09:15.018 ]' 00:09:15.018 13:14:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:15.018 13:14:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:15.018 13:14:16 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:15.018 13:14:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.018 13:14:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:15.018 13:14:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.018 13:14:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:15.018 13:14:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.018 13:14:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:15.018 13:14:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.018 13:14:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:15.018 13:14:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:15.018 13:14:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:15.018 00:09:15.018 real 0m0.165s 00:09:15.018 user 0m0.105s 00:09:15.018 sys 0m0.018s 00:09:15.018 13:14:16 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.018 13:14:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:15.018 ************************************ 00:09:15.018 END TEST rpc_plugins 00:09:15.018 ************************************ 00:09:15.277 13:14:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:15.277 13:14:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:15.277 13:14:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.277 13:14:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.277 ************************************ 00:09:15.277 START TEST rpc_trace_cmd_test 00:09:15.277 ************************************ 00:09:15.277 13:14:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:09:15.277 13:14:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:15.277 13:14:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:15.277 13:14:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.277 13:14:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.277 13:14:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.277 13:14:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:15.277 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56728", 00:09:15.277 "tpoint_group_mask": "0x8", 00:09:15.277 "iscsi_conn": { 00:09:15.277 "mask": "0x2", 00:09:15.277 "tpoint_mask": "0x0" 00:09:15.277 }, 00:09:15.277 "scsi": { 00:09:15.277 "mask": "0x4", 00:09:15.277 "tpoint_mask": "0x0" 00:09:15.277 }, 00:09:15.277 "bdev": { 00:09:15.277 "mask": "0x8", 00:09:15.277 "tpoint_mask": "0xffffffffffffffff" 00:09:15.277 }, 00:09:15.277 "nvmf_rdma": { 00:09:15.277 "mask": "0x10", 00:09:15.277 "tpoint_mask": "0x0" 00:09:15.277 }, 00:09:15.277 "nvmf_tcp": { 00:09:15.277 "mask": "0x20", 00:09:15.277 "tpoint_mask": "0x0" 00:09:15.277 }, 00:09:15.277 "ftl": { 00:09:15.277 
"mask": "0x40", 00:09:15.277 "tpoint_mask": "0x0" 00:09:15.277 }, 00:09:15.277 "blobfs": { 00:09:15.277 "mask": "0x80", 00:09:15.277 "tpoint_mask": "0x0" 00:09:15.277 }, 00:09:15.277 "dsa": { 00:09:15.277 "mask": "0x200", 00:09:15.277 "tpoint_mask": "0x0" 00:09:15.277 }, 00:09:15.277 "thread": { 00:09:15.277 "mask": "0x400", 00:09:15.277 "tpoint_mask": "0x0" 00:09:15.277 }, 00:09:15.277 "nvme_pcie": { 00:09:15.277 "mask": "0x800", 00:09:15.277 "tpoint_mask": "0x0" 00:09:15.277 }, 00:09:15.277 "iaa": { 00:09:15.277 "mask": "0x1000", 00:09:15.277 "tpoint_mask": "0x0" 00:09:15.277 }, 00:09:15.277 "nvme_tcp": { 00:09:15.277 "mask": "0x2000", 00:09:15.277 "tpoint_mask": "0x0" 00:09:15.277 }, 00:09:15.277 "bdev_nvme": { 00:09:15.277 "mask": "0x4000", 00:09:15.277 "tpoint_mask": "0x0" 00:09:15.277 }, 00:09:15.277 "sock": { 00:09:15.277 "mask": "0x8000", 00:09:15.277 "tpoint_mask": "0x0" 00:09:15.277 }, 00:09:15.277 "blob": { 00:09:15.277 "mask": "0x10000", 00:09:15.277 "tpoint_mask": "0x0" 00:09:15.277 }, 00:09:15.277 "bdev_raid": { 00:09:15.277 "mask": "0x20000", 00:09:15.277 "tpoint_mask": "0x0" 00:09:15.277 } 00:09:15.277 }' 00:09:15.277 13:14:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:15.277 13:14:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:09:15.277 13:14:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:15.277 13:14:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:15.277 13:14:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:15.277 13:14:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:15.277 13:14:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:15.536 13:14:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:15.536 13:14:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:15.536 13:14:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:15.536 00:09:15.536 real 0m0.276s 00:09:15.536 user 0m0.242s 00:09:15.536 sys 0m0.023s 00:09:15.536 13:14:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.536 ************************************ 00:09:15.536 END TEST rpc_trace_cmd_test 00:09:15.536 13:14:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.536 ************************************ 00:09:15.536 13:14:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:15.536 13:14:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:15.536 13:14:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:15.536 13:14:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:15.536 13:14:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.536 13:14:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.536 ************************************ 00:09:15.536 START TEST rpc_daemon_integrity 00:09:15.536 ************************************ 00:09:15.536 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:09:15.536 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:15.536 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.536 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:15.536 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.536 13:14:17 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:15.536 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:15.536 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:15.536 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:15.536 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.536 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:15.536 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.536 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:15.536 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:15.536 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.536 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:15.536 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.536 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:15.536 { 00:09:15.536 "name": "Malloc2", 00:09:15.536 "aliases": [ 00:09:15.536 "07e185e7-a097-416e-8c46-c8830a169058" 00:09:15.536 ], 00:09:15.536 "product_name": "Malloc disk", 00:09:15.536 "block_size": 512, 00:09:15.536 "num_blocks": 16384, 00:09:15.536 "uuid": "07e185e7-a097-416e-8c46-c8830a169058", 00:09:15.536 "assigned_rate_limits": { 00:09:15.536 "rw_ios_per_sec": 0, 00:09:15.536 "rw_mbytes_per_sec": 0, 00:09:15.536 "r_mbytes_per_sec": 0, 00:09:15.536 "w_mbytes_per_sec": 0 00:09:15.536 }, 00:09:15.536 "claimed": false, 00:09:15.536 "zoned": false, 00:09:15.536 "supported_io_types": { 00:09:15.536 "read": true, 00:09:15.536 "write": true, 00:09:15.536 "unmap": true, 00:09:15.536 "flush": true, 00:09:15.536 "reset": true, 00:09:15.536 "nvme_admin": false, 00:09:15.536 "nvme_io": false, 00:09:15.536 "nvme_io_md": false, 00:09:15.536 "write_zeroes": true, 00:09:15.536 "zcopy": true, 00:09:15.537 "get_zone_info": false, 00:09:15.537 "zone_management": false, 00:09:15.537 "zone_append": false, 00:09:15.537 "compare": false, 00:09:15.537 "compare_and_write": false, 00:09:15.537 "abort": true, 00:09:15.537 "seek_hole": false, 00:09:15.537 "seek_data": false, 00:09:15.537 "copy": true, 00:09:15.537 "nvme_iov_md": false 00:09:15.537 }, 00:09:15.537 "memory_domains": [ 00:09:15.537 { 00:09:15.537 "dma_device_id": "system", 00:09:15.537 "dma_device_type": 1 00:09:15.537 }, 00:09:15.537 { 00:09:15.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.537 "dma_device_type": 2 00:09:15.537 } 00:09:15.537 ], 00:09:15.537 "driver_specific": {} 00:09:15.537 } 00:09:15.537 ]' 00:09:15.537 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:15.537 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:15.537 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:15.537 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.537 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:15.537 [2024-09-27 13:14:17.377092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:15.537 [2024-09-27 13:14:17.377148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.537 [2024-09-27 13:14:17.377168] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x14dcfe0 00:09:15.537 [2024-09-27 13:14:17.377178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.537 [2024-09-27 13:14:17.378658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.537 [2024-09-27 13:14:17.378847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:15.537 Passthru0 00:09:15.537 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.795 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:15.795 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.795 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:15.795 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.795 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:15.795 { 00:09:15.795 "name": "Malloc2", 00:09:15.795 "aliases": [ 00:09:15.795 "07e185e7-a097-416e-8c46-c8830a169058" 00:09:15.795 ], 00:09:15.795 "product_name": "Malloc disk", 00:09:15.795 "block_size": 512, 00:09:15.795 "num_blocks": 16384, 00:09:15.795 "uuid": "07e185e7-a097-416e-8c46-c8830a169058", 00:09:15.795 "assigned_rate_limits": { 00:09:15.795 "rw_ios_per_sec": 0, 00:09:15.795 "rw_mbytes_per_sec": 0, 00:09:15.795 "r_mbytes_per_sec": 0, 00:09:15.795 "w_mbytes_per_sec": 0 00:09:15.795 }, 00:09:15.795 "claimed": true, 00:09:15.795 "claim_type": "exclusive_write", 00:09:15.795 "zoned": false, 00:09:15.795 "supported_io_types": { 00:09:15.795 "read": true, 00:09:15.795 "write": true, 00:09:15.795 "unmap": true, 00:09:15.795 "flush": true, 00:09:15.795 "reset": true, 00:09:15.795 "nvme_admin": false, 00:09:15.795 "nvme_io": false, 00:09:15.795 "nvme_io_md": false, 00:09:15.795 "write_zeroes": true, 00:09:15.795 "zcopy": true, 00:09:15.795 "get_zone_info": false, 00:09:15.795 "zone_management": false, 00:09:15.795 "zone_append": false, 00:09:15.796 "compare": false, 00:09:15.796 "compare_and_write": false, 00:09:15.796 "abort": true, 00:09:15.796 "seek_hole": false, 00:09:15.796 "seek_data": false, 00:09:15.796 "copy": true, 00:09:15.796 "nvme_iov_md": false 00:09:15.796 }, 00:09:15.796 "memory_domains": [ 00:09:15.796 { 00:09:15.796 "dma_device_id": "system", 00:09:15.796 "dma_device_type": 1 00:09:15.796 }, 00:09:15.796 { 00:09:15.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.796 "dma_device_type": 2 00:09:15.796 } 00:09:15.796 ], 00:09:15.796 "driver_specific": {} 00:09:15.796 }, 00:09:15.796 { 00:09:15.796 "name": "Passthru0", 00:09:15.796 "aliases": [ 00:09:15.796 "70cd14ad-8ab7-5bb8-8081-d9d2aa196534" 00:09:15.796 ], 00:09:15.796 "product_name": "passthru", 00:09:15.796 "block_size": 512, 00:09:15.796 "num_blocks": 16384, 00:09:15.796 "uuid": "70cd14ad-8ab7-5bb8-8081-d9d2aa196534", 00:09:15.796 "assigned_rate_limits": { 00:09:15.796 "rw_ios_per_sec": 0, 00:09:15.796 "rw_mbytes_per_sec": 0, 00:09:15.796 "r_mbytes_per_sec": 0, 00:09:15.796 "w_mbytes_per_sec": 0 00:09:15.796 }, 00:09:15.796 "claimed": false, 00:09:15.796 "zoned": false, 00:09:15.796 "supported_io_types": { 00:09:15.796 "read": true, 00:09:15.796 "write": true, 00:09:15.796 "unmap": true, 00:09:15.796 "flush": true, 00:09:15.796 "reset": true, 00:09:15.796 "nvme_admin": false, 00:09:15.796 "nvme_io": false, 00:09:15.796 "nvme_io_md": false, 00:09:15.796 "write_zeroes": true, 00:09:15.796 "zcopy": true, 00:09:15.796 "get_zone_info": 
false, 00:09:15.796 "zone_management": false, 00:09:15.796 "zone_append": false, 00:09:15.796 "compare": false, 00:09:15.796 "compare_and_write": false, 00:09:15.796 "abort": true, 00:09:15.796 "seek_hole": false, 00:09:15.796 "seek_data": false, 00:09:15.796 "copy": true, 00:09:15.796 "nvme_iov_md": false 00:09:15.796 }, 00:09:15.796 "memory_domains": [ 00:09:15.796 { 00:09:15.796 "dma_device_id": "system", 00:09:15.796 "dma_device_type": 1 00:09:15.796 }, 00:09:15.796 { 00:09:15.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.796 "dma_device_type": 2 00:09:15.796 } 00:09:15.796 ], 00:09:15.796 "driver_specific": { 00:09:15.796 "passthru": { 00:09:15.796 "name": "Passthru0", 00:09:15.796 "base_bdev_name": "Malloc2" 00:09:15.796 } 00:09:15.796 } 00:09:15.796 } 00:09:15.796 ]' 00:09:15.796 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:15.796 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:15.796 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:15.796 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.796 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:15.796 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.796 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:15.796 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.796 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:15.796 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.796 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:15.796 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.796 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:15.796 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.796 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:15.796 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:15.796 ************************************ 00:09:15.796 END TEST rpc_daemon_integrity 00:09:15.796 ************************************ 00:09:15.796 13:14:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:15.796 00:09:15.796 real 0m0.319s 00:09:15.796 user 0m0.217s 00:09:15.796 sys 0m0.035s 00:09:15.796 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.796 13:14:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:15.796 13:14:17 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:15.796 13:14:17 rpc -- rpc/rpc.sh@84 -- # killprocess 56728 00:09:15.796 13:14:17 rpc -- common/autotest_common.sh@950 -- # '[' -z 56728 ']' 00:09:15.796 13:14:17 rpc -- common/autotest_common.sh@954 -- # kill -0 56728 00:09:15.796 13:14:17 rpc -- common/autotest_common.sh@955 -- # uname 00:09:15.796 13:14:17 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:15.796 13:14:17 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56728 00:09:15.796 13:14:17 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:15.796 13:14:17 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:15.796 killing process 
with pid 56728 00:09:15.796 13:14:17 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56728' 00:09:15.796 13:14:17 rpc -- common/autotest_common.sh@969 -- # kill 56728 00:09:15.796 13:14:17 rpc -- common/autotest_common.sh@974 -- # wait 56728 00:09:16.055 00:09:16.055 real 0m2.815s 00:09:16.055 user 0m3.817s 00:09:16.055 sys 0m0.548s 00:09:16.055 13:14:17 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.055 13:14:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.055 ************************************ 00:09:16.055 END TEST rpc 00:09:16.055 ************************************ 00:09:16.313 13:14:17 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:16.313 13:14:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:16.313 13:14:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.313 13:14:17 -- common/autotest_common.sh@10 -- # set +x 00:09:16.313 ************************************ 00:09:16.313 START TEST skip_rpc 00:09:16.313 ************************************ 00:09:16.313 13:14:17 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:16.313 * Looking for test storage... 00:09:16.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:16.313 13:14:18 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:16.313 13:14:18 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:16.313 13:14:18 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:16.313 13:14:18 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.313 13:14:18 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.314 13:14:18 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.314 13:14:18 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:16.314 13:14:18 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.314 13:14:18 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:16.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.314 --rc genhtml_branch_coverage=1 00:09:16.314 --rc genhtml_function_coverage=1 00:09:16.314 --rc genhtml_legend=1 00:09:16.314 --rc geninfo_all_blocks=1 00:09:16.314 --rc geninfo_unexecuted_blocks=1 00:09:16.314 00:09:16.314 ' 00:09:16.314 13:14:18 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:16.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.314 --rc genhtml_branch_coverage=1 00:09:16.314 --rc genhtml_function_coverage=1 00:09:16.314 --rc genhtml_legend=1 00:09:16.314 --rc geninfo_all_blocks=1 00:09:16.314 --rc geninfo_unexecuted_blocks=1 00:09:16.314 00:09:16.314 ' 00:09:16.314 13:14:18 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:16.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.314 --rc genhtml_branch_coverage=1 00:09:16.314 --rc genhtml_function_coverage=1 00:09:16.314 --rc genhtml_legend=1 00:09:16.314 --rc geninfo_all_blocks=1 00:09:16.314 --rc geninfo_unexecuted_blocks=1 00:09:16.314 00:09:16.314 ' 00:09:16.314 13:14:18 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:16.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.314 --rc genhtml_branch_coverage=1 00:09:16.314 --rc genhtml_function_coverage=1 00:09:16.314 --rc genhtml_legend=1 00:09:16.314 --rc geninfo_all_blocks=1 00:09:16.314 --rc geninfo_unexecuted_blocks=1 00:09:16.314 00:09:16.314 ' 00:09:16.314 13:14:18 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:16.314 13:14:18 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:16.314 13:14:18 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:16.314 13:14:18 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:16.314 13:14:18 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.314 13:14:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.314 ************************************ 00:09:16.314 START TEST skip_rpc 00:09:16.314 ************************************ 00:09:16.314 13:14:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:09:16.314 13:14:18 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56934 00:09:16.314 13:14:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:16.314 13:14:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:16.314 13:14:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:16.571 [2024-09-27 13:14:18.178105] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:09:16.571 [2024-09-27 13:14:18.178232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56934 ] 00:09:16.571 [2024-09-27 13:14:18.321678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.571 [2024-09-27 13:14:18.381365] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.829 [2024-09-27 13:14:18.422095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56934 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 56934 ']' 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 56934 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56934 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:22.094 killing process with pid 56934 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 56934' 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 56934 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 56934 00:09:22.094 00:09:22.094 real 0m5.311s 00:09:22.094 user 0m5.041s 00:09:22.094 sys 0m0.182s 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.094 13:14:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.094 ************************************ 00:09:22.094 END TEST skip_rpc 00:09:22.094 ************************************ 00:09:22.094 13:14:23 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:22.094 13:14:23 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:22.094 13:14:23 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.094 13:14:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.094 ************************************ 00:09:22.094 START TEST skip_rpc_with_json 00:09:22.094 ************************************ 00:09:22.094 13:14:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:09:22.094 13:14:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:22.094 13:14:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57015 00:09:22.094 13:14:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:22.094 13:14:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:22.094 13:14:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57015 00:09:22.094 13:14:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57015 ']' 00:09:22.094 13:14:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.094 13:14:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.095 13:14:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.095 13:14:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.095 13:14:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:22.095 [2024-09-27 13:14:23.528528] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
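The skip_rpc_with_json test that begins here drives the freshly started target through an nvmf round-trip before saving its configuration, and the same sequence can be issued by hand with scripts/rpc.py. A hedged sketch against a running spdk_tgt listening on the default /var/tmp/spdk.sock the log waits for; the config path is the CONFIG_PATH defined in skip_rpc.sh above:

    # querying a transport that does not exist yet fails, as in the log below
    scripts/rpc.py nvmf_get_transports --trtype tcp || true
    # create the TCP transport, then snapshot the whole runtime configuration
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > test/rpc/config.json
    # a second, RPC-less target can then be started straight from that snapshot,
    # which is what the test does next with --no-rpc-server ... --json
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json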
00:09:22.095 [2024-09-27 13:14:23.528649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57015 ] 00:09:22.095 [2024-09-27 13:14:23.668134] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.095 [2024-09-27 13:14:23.727814] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.095 [2024-09-27 13:14:23.768822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:22.095 13:14:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.095 13:14:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:09:22.095 13:14:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:22.095 13:14:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.095 13:14:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:22.095 [2024-09-27 13:14:23.898082] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:22.095 request: 00:09:22.095 { 00:09:22.095 "trtype": "tcp", 00:09:22.095 "method": "nvmf_get_transports", 00:09:22.095 "req_id": 1 00:09:22.095 } 00:09:22.095 Got JSON-RPC error response 00:09:22.095 response: 00:09:22.095 { 00:09:22.095 "code": -19, 00:09:22.095 "message": "No such device" 00:09:22.095 } 00:09:22.095 13:14:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:22.095 13:14:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:22.095 13:14:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.095 13:14:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:22.095 [2024-09-27 13:14:23.910208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.095 13:14:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.095 13:14:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:22.095 13:14:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.095 13:14:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:22.353 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.353 13:14:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:22.353 { 00:09:22.353 "subsystems": [ 00:09:22.353 { 00:09:22.353 "subsystem": "fsdev", 00:09:22.353 "config": [ 00:09:22.353 { 00:09:22.353 "method": "fsdev_set_opts", 00:09:22.353 "params": { 00:09:22.353 "fsdev_io_pool_size": 65535, 00:09:22.353 "fsdev_io_cache_size": 256 00:09:22.353 } 00:09:22.353 } 00:09:22.353 ] 00:09:22.353 }, 00:09:22.353 { 00:09:22.353 "subsystem": "keyring", 00:09:22.353 "config": [] 00:09:22.353 }, 00:09:22.353 { 00:09:22.353 "subsystem": "iobuf", 00:09:22.353 "config": [ 00:09:22.353 { 00:09:22.353 "method": "iobuf_set_options", 00:09:22.353 "params": { 00:09:22.353 "small_pool_count": 8192, 00:09:22.353 "large_pool_count": 1024, 00:09:22.353 "small_bufsize": 8192, 00:09:22.353 "large_bufsize": 135168 00:09:22.353 } 00:09:22.353 } 00:09:22.353 ] 00:09:22.353 
}, 00:09:22.353 { 00:09:22.353 "subsystem": "sock", 00:09:22.353 "config": [ 00:09:22.353 { 00:09:22.353 "method": "sock_set_default_impl", 00:09:22.353 "params": { 00:09:22.353 "impl_name": "uring" 00:09:22.353 } 00:09:22.353 }, 00:09:22.353 { 00:09:22.353 "method": "sock_impl_set_options", 00:09:22.353 "params": { 00:09:22.353 "impl_name": "ssl", 00:09:22.353 "recv_buf_size": 4096, 00:09:22.354 "send_buf_size": 4096, 00:09:22.354 "enable_recv_pipe": true, 00:09:22.354 "enable_quickack": false, 00:09:22.354 "enable_placement_id": 0, 00:09:22.354 "enable_zerocopy_send_server": true, 00:09:22.354 "enable_zerocopy_send_client": false, 00:09:22.354 "zerocopy_threshold": 0, 00:09:22.354 "tls_version": 0, 00:09:22.354 "enable_ktls": false 00:09:22.354 } 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "method": "sock_impl_set_options", 00:09:22.354 "params": { 00:09:22.354 "impl_name": "posix", 00:09:22.354 "recv_buf_size": 2097152, 00:09:22.354 "send_buf_size": 2097152, 00:09:22.354 "enable_recv_pipe": true, 00:09:22.354 "enable_quickack": false, 00:09:22.354 "enable_placement_id": 0, 00:09:22.354 "enable_zerocopy_send_server": true, 00:09:22.354 "enable_zerocopy_send_client": false, 00:09:22.354 "zerocopy_threshold": 0, 00:09:22.354 "tls_version": 0, 00:09:22.354 "enable_ktls": false 00:09:22.354 } 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "method": "sock_impl_set_options", 00:09:22.354 "params": { 00:09:22.354 "impl_name": "uring", 00:09:22.354 "recv_buf_size": 2097152, 00:09:22.354 "send_buf_size": 2097152, 00:09:22.354 "enable_recv_pipe": true, 00:09:22.354 "enable_quickack": false, 00:09:22.354 "enable_placement_id": 0, 00:09:22.354 "enable_zerocopy_send_server": false, 00:09:22.354 "enable_zerocopy_send_client": false, 00:09:22.354 "zerocopy_threshold": 0, 00:09:22.354 "tls_version": 0, 00:09:22.354 "enable_ktls": false 00:09:22.354 } 00:09:22.354 } 00:09:22.354 ] 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "subsystem": "vmd", 00:09:22.354 "config": [] 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "subsystem": "accel", 00:09:22.354 "config": [ 00:09:22.354 { 00:09:22.354 "method": "accel_set_options", 00:09:22.354 "params": { 00:09:22.354 "small_cache_size": 128, 00:09:22.354 "large_cache_size": 16, 00:09:22.354 "task_count": 2048, 00:09:22.354 "sequence_count": 2048, 00:09:22.354 "buf_count": 2048 00:09:22.354 } 00:09:22.354 } 00:09:22.354 ] 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "subsystem": "bdev", 00:09:22.354 "config": [ 00:09:22.354 { 00:09:22.354 "method": "bdev_set_options", 00:09:22.354 "params": { 00:09:22.354 "bdev_io_pool_size": 65535, 00:09:22.354 "bdev_io_cache_size": 256, 00:09:22.354 "bdev_auto_examine": true, 00:09:22.354 "iobuf_small_cache_size": 128, 00:09:22.354 "iobuf_large_cache_size": 16 00:09:22.354 } 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "method": "bdev_raid_set_options", 00:09:22.354 "params": { 00:09:22.354 "process_window_size_kb": 1024, 00:09:22.354 "process_max_bandwidth_mb_sec": 0 00:09:22.354 } 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "method": "bdev_iscsi_set_options", 00:09:22.354 "params": { 00:09:22.354 "timeout_sec": 30 00:09:22.354 } 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "method": "bdev_nvme_set_options", 00:09:22.354 "params": { 00:09:22.354 "action_on_timeout": "none", 00:09:22.354 "timeout_us": 0, 00:09:22.354 "timeout_admin_us": 0, 00:09:22.354 "keep_alive_timeout_ms": 10000, 00:09:22.354 "arbitration_burst": 0, 00:09:22.354 "low_priority_weight": 0, 00:09:22.354 "medium_priority_weight": 0, 00:09:22.354 "high_priority_weight": 0, 
00:09:22.354 "nvme_adminq_poll_period_us": 10000, 00:09:22.354 "nvme_ioq_poll_period_us": 0, 00:09:22.354 "io_queue_requests": 0, 00:09:22.354 "delay_cmd_submit": true, 00:09:22.354 "transport_retry_count": 4, 00:09:22.354 "bdev_retry_count": 3, 00:09:22.354 "transport_ack_timeout": 0, 00:09:22.354 "ctrlr_loss_timeout_sec": 0, 00:09:22.354 "reconnect_delay_sec": 0, 00:09:22.354 "fast_io_fail_timeout_sec": 0, 00:09:22.354 "disable_auto_failback": false, 00:09:22.354 "generate_uuids": false, 00:09:22.354 "transport_tos": 0, 00:09:22.354 "nvme_error_stat": false, 00:09:22.354 "rdma_srq_size": 0, 00:09:22.354 "io_path_stat": false, 00:09:22.354 "allow_accel_sequence": false, 00:09:22.354 "rdma_max_cq_size": 0, 00:09:22.354 "rdma_cm_event_timeout_ms": 0, 00:09:22.354 "dhchap_digests": [ 00:09:22.354 "sha256", 00:09:22.354 "sha384", 00:09:22.354 "sha512" 00:09:22.354 ], 00:09:22.354 "dhchap_dhgroups": [ 00:09:22.354 "null", 00:09:22.354 "ffdhe2048", 00:09:22.354 "ffdhe3072", 00:09:22.354 "ffdhe4096", 00:09:22.354 "ffdhe6144", 00:09:22.354 "ffdhe8192" 00:09:22.354 ] 00:09:22.354 } 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "method": "bdev_nvme_set_hotplug", 00:09:22.354 "params": { 00:09:22.354 "period_us": 100000, 00:09:22.354 "enable": false 00:09:22.354 } 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "method": "bdev_wait_for_examine" 00:09:22.354 } 00:09:22.354 ] 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "subsystem": "scsi", 00:09:22.354 "config": null 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "subsystem": "scheduler", 00:09:22.354 "config": [ 00:09:22.354 { 00:09:22.354 "method": "framework_set_scheduler", 00:09:22.354 "params": { 00:09:22.354 "name": "static" 00:09:22.354 } 00:09:22.354 } 00:09:22.354 ] 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "subsystem": "vhost_scsi", 00:09:22.354 "config": [] 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "subsystem": "vhost_blk", 00:09:22.354 "config": [] 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "subsystem": "ublk", 00:09:22.354 "config": [] 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "subsystem": "nbd", 00:09:22.354 "config": [] 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "subsystem": "nvmf", 00:09:22.354 "config": [ 00:09:22.354 { 00:09:22.354 "method": "nvmf_set_config", 00:09:22.354 "params": { 00:09:22.354 "discovery_filter": "match_any", 00:09:22.354 "admin_cmd_passthru": { 00:09:22.354 "identify_ctrlr": false 00:09:22.354 }, 00:09:22.354 "dhchap_digests": [ 00:09:22.354 "sha256", 00:09:22.354 "sha384", 00:09:22.354 "sha512" 00:09:22.354 ], 00:09:22.354 "dhchap_dhgroups": [ 00:09:22.354 "null", 00:09:22.354 "ffdhe2048", 00:09:22.354 "ffdhe3072", 00:09:22.354 "ffdhe4096", 00:09:22.354 "ffdhe6144", 00:09:22.354 "ffdhe8192" 00:09:22.354 ] 00:09:22.354 } 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "method": "nvmf_set_max_subsystems", 00:09:22.354 "params": { 00:09:22.354 "max_subsystems": 1024 00:09:22.354 } 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "method": "nvmf_set_crdt", 00:09:22.354 "params": { 00:09:22.354 "crdt1": 0, 00:09:22.354 "crdt2": 0, 00:09:22.354 "crdt3": 0 00:09:22.354 } 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "method": "nvmf_create_transport", 00:09:22.354 "params": { 00:09:22.354 "trtype": "TCP", 00:09:22.354 "max_queue_depth": 128, 00:09:22.354 "max_io_qpairs_per_ctrlr": 127, 00:09:22.354 "in_capsule_data_size": 4096, 00:09:22.354 "max_io_size": 131072, 00:09:22.354 "io_unit_size": 131072, 00:09:22.354 "max_aq_depth": 128, 00:09:22.354 "num_shared_buffers": 511, 00:09:22.354 "buf_cache_size": 4294967295, 00:09:22.354 
"dif_insert_or_strip": false, 00:09:22.354 "zcopy": false, 00:09:22.354 "c2h_success": true, 00:09:22.354 "sock_priority": 0, 00:09:22.354 "abort_timeout_sec": 1, 00:09:22.354 "ack_timeout": 0, 00:09:22.354 "data_wr_pool_size": 0 00:09:22.354 } 00:09:22.354 } 00:09:22.354 ] 00:09:22.354 }, 00:09:22.354 { 00:09:22.354 "subsystem": "iscsi", 00:09:22.354 "config": [ 00:09:22.354 { 00:09:22.354 "method": "iscsi_set_options", 00:09:22.354 "params": { 00:09:22.354 "node_base": "iqn.2016-06.io.spdk", 00:09:22.354 "max_sessions": 128, 00:09:22.354 "max_connections_per_session": 2, 00:09:22.354 "max_queue_depth": 64, 00:09:22.354 "default_time2wait": 2, 00:09:22.354 "default_time2retain": 20, 00:09:22.354 "first_burst_length": 8192, 00:09:22.354 "immediate_data": true, 00:09:22.354 "allow_duplicated_isid": false, 00:09:22.354 "error_recovery_level": 0, 00:09:22.354 "nop_timeout": 60, 00:09:22.354 "nop_in_interval": 30, 00:09:22.354 "disable_chap": false, 00:09:22.354 "require_chap": false, 00:09:22.354 "mutual_chap": false, 00:09:22.354 "chap_group": 0, 00:09:22.354 "max_large_datain_per_connection": 64, 00:09:22.354 "max_r2t_per_connection": 4, 00:09:22.354 "pdu_pool_size": 36864, 00:09:22.354 "immediate_data_pool_size": 16384, 00:09:22.354 "data_out_pool_size": 2048 00:09:22.354 } 00:09:22.354 } 00:09:22.354 ] 00:09:22.354 } 00:09:22.354 ] 00:09:22.354 } 00:09:22.354 13:14:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:22.354 13:14:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57015 00:09:22.354 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57015 ']' 00:09:22.354 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57015 00:09:22.354 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:09:22.354 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.355 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57015 00:09:22.355 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:22.355 killing process with pid 57015 00:09:22.355 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:22.355 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57015' 00:09:22.355 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57015 00:09:22.355 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57015 00:09:22.613 13:14:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57035 00:09:22.613 13:14:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:22.613 13:14:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:27.890 13:14:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57035 00:09:27.890 13:14:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57035 ']' 00:09:27.890 13:14:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57035 00:09:27.890 13:14:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:09:27.890 13:14:29 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:27.890 13:14:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57035 00:09:27.890 13:14:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:27.890 killing process with pid 57035 00:09:27.890 13:14:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:27.890 13:14:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57035' 00:09:27.890 13:14:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57035 00:09:27.890 13:14:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57035 00:09:27.890 13:14:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:27.890 13:14:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:27.890 00:09:27.890 real 0m6.241s 00:09:27.890 user 0m5.955s 00:09:27.890 sys 0m0.450s 00:09:27.890 13:14:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.890 ************************************ 00:09:27.890 END TEST skip_rpc_with_json 00:09:27.890 ************************************ 00:09:27.890 13:14:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:28.148 13:14:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:28.148 13:14:29 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:28.148 13:14:29 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.148 13:14:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.148 ************************************ 00:09:28.148 START TEST skip_rpc_with_delay 00:09:28.148 ************************************ 00:09:28.148 13:14:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:09:28.148 13:14:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:28.148 13:14:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:09:28.148 13:14:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:28.148 13:14:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:28.148 13:14:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.148 13:14:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:28.148 13:14:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.148 13:14:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:28.148 13:14:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.148 13:14:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:28.148 13:14:29 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:28.148 13:14:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:28.148 [2024-09-27 13:14:29.810704] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:09:28.148 [2024-09-27 13:14:29.811356] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:09:28.148 13:14:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:09:28.148 13:14:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:28.148 13:14:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:28.148 13:14:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:28.148 00:09:28.148 real 0m0.080s 00:09:28.148 user 0m0.058s 00:09:28.148 sys 0m0.020s 00:09:28.148 13:14:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.148 13:14:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:28.148 ************************************ 00:09:28.148 END TEST skip_rpc_with_delay 00:09:28.148 ************************************ 00:09:28.148 13:14:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:28.149 13:14:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:28.149 13:14:29 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:28.149 13:14:29 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:28.149 13:14:29 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.149 13:14:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.149 ************************************ 00:09:28.149 START TEST exit_on_failed_rpc_init 00:09:28.149 ************************************ 00:09:28.149 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:09:28.149 13:14:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57145 00:09:28.149 13:14:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57145 00:09:28.149 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57145 ']' 00:09:28.149 13:14:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:28.149 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.149 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.149 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.149 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.149 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:28.149 [2024-09-27 13:14:29.940546] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:09:28.149 [2024-09-27 13:14:29.940626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57145 ] 00:09:28.408 [2024-09-27 13:14:30.077924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.408 [2024-09-27 13:14:30.137445] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.408 [2024-09-27 13:14:30.178210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:28.666 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.666 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:09:28.666 13:14:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:28.666 13:14:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:28.666 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:09:28.666 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:28.666 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:28.666 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.666 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:28.666 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.666 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:28.666 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.666 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:28.666 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:28.666 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:28.666 [2024-09-27 13:14:30.371905] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:09:28.666 [2024-09-27 13:14:30.371994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57154 ] 00:09:28.666 [2024-09-27 13:14:30.511099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.924 [2024-09-27 13:14:30.578640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.924 [2024-09-27 13:14:30.578753] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:09:28.924 [2024-09-27 13:14:30.578772] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:28.924 [2024-09-27 13:14:30.578782] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:28.924 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:09:28.924 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:28.924 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:09:28.924 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:09:28.924 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:09:28.924 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:28.924 13:14:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:28.924 13:14:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57145 00:09:28.924 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57145 ']' 00:09:28.924 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57145 00:09:28.924 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:09:28.924 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:28.924 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57145 00:09:28.924 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:28.924 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:28.924 killing process with pid 57145 00:09:28.924 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57145' 00:09:28.924 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57145 00:09:28.924 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57145 00:09:29.183 00:09:29.183 real 0m1.093s 00:09:29.183 user 0m1.307s 00:09:29.183 sys 0m0.278s 00:09:29.183 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.183 13:14:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:29.183 ************************************ 00:09:29.183 END TEST exit_on_failed_rpc_init 00:09:29.183 ************************************ 00:09:29.183 13:14:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:29.183 00:09:29.183 real 0m13.079s 00:09:29.183 user 0m12.511s 00:09:29.183 sys 0m1.135s 00:09:29.183 13:14:31 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.183 13:14:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.183 ************************************ 00:09:29.183 END TEST skip_rpc 00:09:29.183 ************************************ 00:09:29.442 13:14:31 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:29.442 13:14:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:29.442 13:14:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.442 13:14:31 -- common/autotest_common.sh@10 -- # set +x 00:09:29.442 
************************************ 00:09:29.442 START TEST rpc_client 00:09:29.442 ************************************ 00:09:29.442 13:14:31 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:29.442 * Looking for test storage... 00:09:29.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:29.442 13:14:31 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:29.442 13:14:31 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:29.442 13:14:31 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:09:29.442 13:14:31 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@345 -- # : 1 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.442 13:14:31 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:29.442 13:14:31 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.442 13:14:31 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:29.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.442 --rc genhtml_branch_coverage=1 00:09:29.442 --rc genhtml_function_coverage=1 00:09:29.442 --rc genhtml_legend=1 00:09:29.442 --rc geninfo_all_blocks=1 00:09:29.442 --rc geninfo_unexecuted_blocks=1 00:09:29.442 00:09:29.442 ' 00:09:29.442 13:14:31 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:29.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.442 --rc genhtml_branch_coverage=1 00:09:29.442 --rc genhtml_function_coverage=1 00:09:29.442 --rc genhtml_legend=1 00:09:29.442 --rc geninfo_all_blocks=1 00:09:29.442 --rc geninfo_unexecuted_blocks=1 00:09:29.442 00:09:29.442 ' 00:09:29.442 13:14:31 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:29.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.442 --rc genhtml_branch_coverage=1 00:09:29.442 --rc genhtml_function_coverage=1 00:09:29.442 --rc genhtml_legend=1 00:09:29.442 --rc geninfo_all_blocks=1 00:09:29.442 --rc geninfo_unexecuted_blocks=1 00:09:29.442 00:09:29.442 ' 00:09:29.442 13:14:31 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:29.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.442 --rc genhtml_branch_coverage=1 00:09:29.442 --rc genhtml_function_coverage=1 00:09:29.442 --rc genhtml_legend=1 00:09:29.442 --rc geninfo_all_blocks=1 00:09:29.442 --rc geninfo_unexecuted_blocks=1 00:09:29.442 00:09:29.442 ' 00:09:29.442 13:14:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:29.442 OK 00:09:29.442 13:14:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:29.442 00:09:29.442 real 0m0.214s 00:09:29.442 user 0m0.141s 00:09:29.442 sys 0m0.076s 00:09:29.442 13:14:31 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.442 13:14:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:29.442 ************************************ 00:09:29.442 END TEST rpc_client 00:09:29.442 ************************************ 00:09:29.702 13:14:31 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:29.702 13:14:31 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:29.702 13:14:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.702 13:14:31 -- common/autotest_common.sh@10 -- # set +x 00:09:29.702 ************************************ 00:09:29.702 START TEST json_config 00:09:29.702 ************************************ 00:09:29.702 13:14:31 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:29.702 13:14:31 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:29.702 13:14:31 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:09:29.702 13:14:31 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:29.702 13:14:31 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:29.702 13:14:31 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.702 13:14:31 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.702 13:14:31 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.702 13:14:31 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.702 13:14:31 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.702 13:14:31 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.702 13:14:31 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.702 13:14:31 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.702 13:14:31 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.702 13:14:31 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.702 13:14:31 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.702 13:14:31 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:29.702 13:14:31 json_config -- scripts/common.sh@345 -- # : 1 00:09:29.702 13:14:31 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.702 13:14:31 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.702 13:14:31 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:29.702 13:14:31 json_config -- scripts/common.sh@353 -- # local d=1 00:09:29.702 13:14:31 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.702 13:14:31 json_config -- scripts/common.sh@355 -- # echo 1 00:09:29.702 13:14:31 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.702 13:14:31 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:29.702 13:14:31 json_config -- scripts/common.sh@353 -- # local d=2 00:09:29.702 13:14:31 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.702 13:14:31 json_config -- scripts/common.sh@355 -- # echo 2 00:09:29.702 13:14:31 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.702 13:14:31 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.702 13:14:31 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.702 13:14:31 json_config -- scripts/common.sh@368 -- # return 0 00:09:29.702 13:14:31 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.702 13:14:31 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:29.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.702 --rc genhtml_branch_coverage=1 00:09:29.702 --rc genhtml_function_coverage=1 00:09:29.702 --rc genhtml_legend=1 00:09:29.702 --rc geninfo_all_blocks=1 00:09:29.702 --rc geninfo_unexecuted_blocks=1 00:09:29.702 00:09:29.702 ' 00:09:29.702 13:14:31 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:29.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.702 --rc genhtml_branch_coverage=1 00:09:29.702 --rc genhtml_function_coverage=1 00:09:29.702 --rc genhtml_legend=1 00:09:29.702 --rc geninfo_all_blocks=1 00:09:29.702 --rc geninfo_unexecuted_blocks=1 00:09:29.702 00:09:29.702 ' 00:09:29.702 13:14:31 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:29.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.702 --rc genhtml_branch_coverage=1 00:09:29.702 --rc genhtml_function_coverage=1 00:09:29.702 --rc genhtml_legend=1 00:09:29.702 --rc geninfo_all_blocks=1 00:09:29.702 --rc geninfo_unexecuted_blocks=1 00:09:29.702 00:09:29.702 ' 00:09:29.702 13:14:31 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:29.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.702 --rc genhtml_branch_coverage=1 00:09:29.702 --rc genhtml_function_coverage=1 00:09:29.702 --rc genhtml_legend=1 00:09:29.702 --rc geninfo_all_blocks=1 00:09:29.702 --rc geninfo_unexecuted_blocks=1 00:09:29.702 00:09:29.702 ' 00:09:29.702 13:14:31 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:29.702 13:14:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:29.702 13:14:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.702 13:14:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.702 13:14:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.702 13:14:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.702 13:14:31 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.702 13:14:31 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:29.702 13:14:31 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.702 
13:14:31 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:29.702 13:14:31 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:09:29.702 13:14:31 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:09:29.702 13:14:31 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.702 13:14:31 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:29.702 13:14:31 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:09:29.702 13:14:31 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.702 13:14:31 json_config -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:29.702 13:14:31 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.702 13:14:31 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.702 13:14:31 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.703 13:14:31 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.703 13:14:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.703 13:14:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.703 13:14:31 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.703 13:14:31 json_config -- paths/export.sh@5 -- # export PATH 00:09:29.703 13:14:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.703 13:14:31 json_config -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:09:29.703 13:14:31 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:29.703 13:14:31 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:29.703 13:14:31 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:29.703 13:14:31 json_config -- nvmf/common.sh@50 -- # : 0 00:09:29.703 
13:14:31 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:29.703 13:14:31 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:29.703 13:14:31 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:29.703 13:14:31 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.703 13:14:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.703 13:14:31 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:29.703 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:29.703 13:14:31 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:29.703 13:14:31 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:29.703 13:14:31 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:29.703 INFO: JSON configuration test init 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:09:29.703 13:14:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:29.703 13:14:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:09:29.703 13:14:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:29.703 13:14:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:29.703 13:14:31 json_config -- json_config/json_config.sh@272 -- # 
json_config_test_start_app target --wait-for-rpc 00:09:29.703 13:14:31 json_config -- json_config/common.sh@9 -- # local app=target 00:09:29.703 13:14:31 json_config -- json_config/common.sh@10 -- # shift 00:09:29.703 13:14:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:29.703 13:14:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:29.703 13:14:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:29.703 13:14:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:29.703 13:14:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:29.703 13:14:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57289 00:09:29.703 Waiting for target to run... 00:09:29.703 13:14:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:29.703 13:14:31 json_config -- json_config/common.sh@25 -- # waitforlisten 57289 /var/tmp/spdk_tgt.sock 00:09:29.703 13:14:31 json_config -- common/autotest_common.sh@831 -- # '[' -z 57289 ']' 00:09:29.703 13:14:31 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:29.703 13:14:31 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:29.703 13:14:31 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:29.703 13:14:31 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:29.703 13:14:31 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.703 13:14:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:29.961 [2024-09-27 13:14:31.583146] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:09:29.961 [2024-09-27 13:14:31.583248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57289 ] 00:09:30.220 [2024-09-27 13:14:31.885607] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.220 [2024-09-27 13:14:31.937046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.787 13:14:32 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.787 13:14:32 json_config -- common/autotest_common.sh@864 -- # return 0 00:09:30.787 00:09:30.787 13:14:32 json_config -- json_config/common.sh@26 -- # echo '' 00:09:30.787 13:14:32 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:09:30.787 13:14:32 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:09:30.787 13:14:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:30.787 13:14:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:30.787 13:14:32 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:09:30.787 13:14:32 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:09:30.787 13:14:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:30.787 13:14:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:31.045 13:14:32 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:31.045 13:14:32 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:09:31.045 13:14:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:31.303 [2024-09-27 13:14:32.955360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.303 13:14:33 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:09:31.303 13:14:33 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:31.303 13:14:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.303 13:14:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:31.303 13:14:33 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:31.303 13:14:33 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:31.303 13:14:33 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:31.303 13:14:33 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:09:31.303 13:14:33 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:09:31.303 13:14:33 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:09:31.303 13:14:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:31.303 13:14:33 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@51 -- # local get_types 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@54 -- # sort 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:09:31.869 13:14:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:31.869 13:14:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@62 -- # return 0 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:09:31.869 13:14:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.869 13:14:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:09:31.869 13:14:33 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:31.869 13:14:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:32.127 MallocForNvmf0 00:09:32.127 13:14:33 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:32.127 13:14:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:32.385 MallocForNvmf1 00:09:32.385 13:14:34 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:09:32.385 13:14:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:09:32.644 [2024-09-27 13:14:34.334669] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.644 13:14:34 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:32.644 13:14:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:32.912 13:14:34 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:32.912 13:14:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:33.170 13:14:34 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:33.170 13:14:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:33.427 13:14:35 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:33.427 13:14:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:33.686 [2024-09-27 13:14:35.391291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:33.686 13:14:35 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:09:33.686 13:14:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:33.686 13:14:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:33.686 13:14:35 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:09:33.686 13:14:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:33.686 13:14:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:33.686 13:14:35 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:09:33.686 13:14:35 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:33.686 13:14:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:33.944 MallocBdevForConfigChangeCheck 00:09:33.944 13:14:35 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:09:33.944 13:14:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:33.944 13:14:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:34.202 13:14:35 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:09:34.202 13:14:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:34.460 INFO: shutting down applications... 00:09:34.460 13:14:36 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
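
For reference, the json_config_setup_target phase traced above builds the nvmf side of the configuration with a short, fixed sequence of RPCs against the target's /var/tmp/spdk_tgt.sock socket. As a rough standalone sketch only (assuming the SPDK repository root as the working directory and a target already running with -r /var/tmp/spdk_tgt.sock; the rpc() helper is added here for brevity, all arguments are copied from the trace), the same bring-up could be replayed like this:

    #!/usr/bin/env bash
    # Replay of the nvmf bring-up RPCs seen in the trace above.
    rpc() { ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }

    rpc bdev_malloc_create 8 512 --name MallocForNvmf0    # backing bdev for the first namespace
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1   # backing bdev for the second namespace
    rpc nvmf_create_transport -t tcp -u 8192 -c 0         # TCP transport (flags as traced: -u io_unit_size, -c in_capsule_data_size)
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
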
00:09:34.460 13:14:36 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:09:34.460 13:14:36 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:09:34.460 13:14:36 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:09:34.460 13:14:36 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:34.718 Calling clear_iscsi_subsystem 00:09:34.718 Calling clear_nvmf_subsystem 00:09:34.718 Calling clear_nbd_subsystem 00:09:34.718 Calling clear_ublk_subsystem 00:09:34.718 Calling clear_vhost_blk_subsystem 00:09:34.718 Calling clear_vhost_scsi_subsystem 00:09:34.718 Calling clear_bdev_subsystem 00:09:34.718 13:14:36 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:34.718 13:14:36 json_config -- json_config/json_config.sh@350 -- # count=100 00:09:34.718 13:14:36 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:09:34.718 13:14:36 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:34.718 13:14:36 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:34.718 13:14:36 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:35.285 13:14:36 json_config -- json_config/json_config.sh@352 -- # break 00:09:35.285 13:14:36 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:09:35.285 13:14:36 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:09:35.285 13:14:36 json_config -- json_config/common.sh@31 -- # local app=target 00:09:35.285 13:14:36 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:35.285 13:14:36 json_config -- json_config/common.sh@35 -- # [[ -n 57289 ]] 00:09:35.286 13:14:36 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57289 00:09:35.286 13:14:36 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:35.286 13:14:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:35.286 13:14:36 json_config -- json_config/common.sh@41 -- # kill -0 57289 00:09:35.286 13:14:36 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:35.854 13:14:37 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:35.854 13:14:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:35.854 13:14:37 json_config -- json_config/common.sh@41 -- # kill -0 57289 00:09:35.854 13:14:37 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:35.854 13:14:37 json_config -- json_config/common.sh@43 -- # break 00:09:35.854 13:14:37 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:35.854 13:14:37 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:35.854 SPDK target shutdown done 00:09:35.854 13:14:37 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:09:35.854 INFO: relaunching applications... 
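
The shutdown traced above (json_config_test_shutdown_app in json_config/common.sh) amounts to a SIGINT followed by a bounded poll on the target's PID, 30 tries with a 0.5 s sleep. A minimal sketch of that pattern, assuming $pid holds the spdk_tgt PID:

    # Ask the target to exit, then wait up to ~15 s for the process to go away.
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done
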
00:09:35.854 13:14:37 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:35.854 13:14:37 json_config -- json_config/common.sh@9 -- # local app=target 00:09:35.854 13:14:37 json_config -- json_config/common.sh@10 -- # shift 00:09:35.854 13:14:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:35.854 13:14:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:35.854 13:14:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:35.854 Waiting for target to run... 00:09:35.854 13:14:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:35.854 13:14:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:35.854 13:14:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57485 00:09:35.854 13:14:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:35.854 13:14:37 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:35.854 13:14:37 json_config -- json_config/common.sh@25 -- # waitforlisten 57485 /var/tmp/spdk_tgt.sock 00:09:35.854 13:14:37 json_config -- common/autotest_common.sh@831 -- # '[' -z 57485 ']' 00:09:35.854 13:14:37 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:35.854 13:14:37 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.854 13:14:37 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:35.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:35.854 13:14:37 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.854 13:14:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:35.854 [2024-09-27 13:14:37.567745] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:09:35.854 [2024-09-27 13:14:37.567961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57485 ] 00:09:36.115 [2024-09-27 13:14:37.861845] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.115 [2024-09-27 13:14:37.911109] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.374 [2024-09-27 13:14:38.043389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.633 [2024-09-27 13:14:38.249043] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.633 [2024-09-27 13:14:38.281122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:36.892 13:14:38 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.892 00:09:36.892 INFO: Checking if target configuration is the same... 
00:09:36.892 13:14:38 json_config -- common/autotest_common.sh@864 -- # return 0 00:09:36.892 13:14:38 json_config -- json_config/common.sh@26 -- # echo '' 00:09:36.892 13:14:38 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:09:36.892 13:14:38 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:36.892 13:14:38 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:36.892 13:14:38 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:09:36.892 13:14:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:36.892 + '[' 2 -ne 2 ']' 00:09:36.892 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:36.892 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:36.892 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:36.892 +++ basename /dev/fd/62 00:09:36.892 ++ mktemp /tmp/62.XXX 00:09:36.892 + tmp_file_1=/tmp/62.JgV 00:09:36.892 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:36.892 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:36.892 + tmp_file_2=/tmp/spdk_tgt_config.json.2cz 00:09:36.892 + ret=0 00:09:36.892 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:37.150 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:37.409 + diff -u /tmp/62.JgV /tmp/spdk_tgt_config.json.2cz 00:09:37.409 INFO: JSON config files are the same 00:09:37.409 + echo 'INFO: JSON config files are the same' 00:09:37.409 + rm /tmp/62.JgV /tmp/spdk_tgt_config.json.2cz 00:09:37.409 + exit 0 00:09:37.410 INFO: changing configuration and checking if this can be detected... 00:09:37.410 13:14:39 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:09:37.410 13:14:39 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:37.410 13:14:39 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:37.410 13:14:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:37.668 13:14:39 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:37.668 13:14:39 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:09:37.668 13:14:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:37.668 + '[' 2 -ne 2 ']' 00:09:37.668 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:37.668 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:09:37.668 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:37.668 +++ basename /dev/fd/62 00:09:37.668 ++ mktemp /tmp/62.XXX 00:09:37.668 + tmp_file_1=/tmp/62.AOA 00:09:37.668 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:37.668 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:37.668 + tmp_file_2=/tmp/spdk_tgt_config.json.ru4 00:09:37.668 + ret=0 00:09:37.668 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:37.927 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:38.185 + diff -u /tmp/62.AOA /tmp/spdk_tgt_config.json.ru4 00:09:38.185 + ret=1 00:09:38.185 + echo '=== Start of file: /tmp/62.AOA ===' 00:09:38.185 + cat /tmp/62.AOA 00:09:38.185 + echo '=== End of file: /tmp/62.AOA ===' 00:09:38.185 + echo '' 00:09:38.185 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ru4 ===' 00:09:38.185 + cat /tmp/spdk_tgt_config.json.ru4 00:09:38.185 + echo '=== End of file: /tmp/spdk_tgt_config.json.ru4 ===' 00:09:38.185 + echo '' 00:09:38.185 + rm /tmp/62.AOA /tmp/spdk_tgt_config.json.ru4 00:09:38.185 + exit 1 00:09:38.185 INFO: configuration change detected. 00:09:38.185 13:14:39 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:09:38.185 13:14:39 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:09:38.185 13:14:39 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:09:38.185 13:14:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:38.185 13:14:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:38.185 13:14:39 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:09:38.185 13:14:39 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:09:38.185 13:14:39 json_config -- json_config/json_config.sh@324 -- # [[ -n 57485 ]] 00:09:38.185 13:14:39 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:09:38.185 13:14:39 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:09:38.185 13:14:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:38.185 13:14:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:38.185 13:14:39 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:09:38.185 13:14:39 json_config -- json_config/json_config.sh@200 -- # uname -s 00:09:38.185 13:14:39 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:09:38.185 13:14:39 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:09:38.185 13:14:39 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:09:38.185 13:14:39 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:09:38.185 13:14:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:38.185 13:14:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:38.185 13:14:39 json_config -- json_config/json_config.sh@330 -- # killprocess 57485 00:09:38.185 13:14:39 json_config -- common/autotest_common.sh@950 -- # '[' -z 57485 ']' 00:09:38.185 13:14:39 json_config -- common/autotest_common.sh@954 -- # kill -0 57485 00:09:38.185 13:14:39 json_config -- common/autotest_common.sh@955 -- # uname 00:09:38.185 13:14:39 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.185 13:14:39 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57485 00:09:38.185 
killing process with pid 57485 00:09:38.185 13:14:39 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:38.185 13:14:39 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:38.185 13:14:39 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57485' 00:09:38.185 13:14:39 json_config -- common/autotest_common.sh@969 -- # kill 57485 00:09:38.185 13:14:39 json_config -- common/autotest_common.sh@974 -- # wait 57485 00:09:38.444 13:14:40 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:38.444 13:14:40 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:09:38.444 13:14:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:38.444 13:14:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:38.444 INFO: Success 00:09:38.444 13:14:40 json_config -- json_config/json_config.sh@335 -- # return 0 00:09:38.444 13:14:40 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:09:38.444 ************************************ 00:09:38.444 END TEST json_config 00:09:38.444 ************************************ 00:09:38.444 00:09:38.444 real 0m8.849s 00:09:38.444 user 0m13.153s 00:09:38.444 sys 0m1.398s 00:09:38.444 13:14:40 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.444 13:14:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:38.444 13:14:40 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:38.444 13:14:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:38.444 13:14:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.444 13:14:40 -- common/autotest_common.sh@10 -- # set +x 00:09:38.444 ************************************ 00:09:38.444 START TEST json_config_extra_key 00:09:38.444 ************************************ 00:09:38.444 13:14:40 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:38.444 13:14:40 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:38.444 13:14:40 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:09:38.444 13:14:40 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:38.703 13:14:40 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.703 13:14:40 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.703 13:14:40 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:38.703 13:14:40 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.703 13:14:40 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:38.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.703 --rc genhtml_branch_coverage=1 00:09:38.703 --rc genhtml_function_coverage=1 00:09:38.703 --rc genhtml_legend=1 00:09:38.703 --rc geninfo_all_blocks=1 00:09:38.703 --rc geninfo_unexecuted_blocks=1 00:09:38.703 00:09:38.703 ' 00:09:38.703 13:14:40 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:38.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.703 --rc genhtml_branch_coverage=1 00:09:38.703 --rc genhtml_function_coverage=1 00:09:38.703 --rc genhtml_legend=1 00:09:38.703 --rc geninfo_all_blocks=1 00:09:38.703 --rc geninfo_unexecuted_blocks=1 00:09:38.703 00:09:38.703 ' 00:09:38.703 13:14:40 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:38.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.704 --rc genhtml_branch_coverage=1 00:09:38.704 --rc genhtml_function_coverage=1 00:09:38.704 --rc genhtml_legend=1 00:09:38.704 --rc geninfo_all_blocks=1 00:09:38.704 --rc geninfo_unexecuted_blocks=1 00:09:38.704 00:09:38.704 ' 00:09:38.704 13:14:40 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:38.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.704 --rc genhtml_branch_coverage=1 00:09:38.704 --rc genhtml_function_coverage=1 00:09:38.704 --rc genhtml_legend=1 00:09:38.704 --rc geninfo_all_blocks=1 00:09:38.704 --rc geninfo_unexecuted_blocks=1 00:09:38.704 00:09:38.704 ' 00:09:38.704 13:14:40 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:38.704 13:14:40 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:38.704 13:14:40 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.704 13:14:40 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.704 13:14:40 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.704 13:14:40 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.704 13:14:40 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.704 13:14:40 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.704 13:14:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:38.704 13:14:40 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:38.704 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:38.704 13:14:40 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:38.704 13:14:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:38.704 13:14:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:38.704 13:14:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:38.704 13:14:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:38.704 INFO: launching applications... 00:09:38.704 13:14:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:38.704 13:14:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:38.704 13:14:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:38.704 13:14:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:38.704 13:14:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:38.704 13:14:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:38.704 13:14:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
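The common.sh setup traced just above keys everything on the app name ('target'), so the same helpers can drive several apps from one set of arrays. A condensed sketch of that bookkeeping, with the values copied from the trace (the final echo is illustrative only):
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
# Any helper can now look up its app by name:
echo "launching target with ${app_params[target]} and config ${configs_path[target]}"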
00:09:38.704 13:14:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:38.704 13:14:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:38.704 13:14:40 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:38.704 13:14:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:38.704 13:14:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:38.704 13:14:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:38.704 13:14:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:38.704 13:14:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:38.704 13:14:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57639 00:09:38.704 13:14:40 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:38.704 Waiting for target to run... 00:09:38.704 13:14:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:38.704 13:14:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57639 /var/tmp/spdk_tgt.sock 00:09:38.704 13:14:40 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57639 ']' 00:09:38.704 13:14:40 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:38.704 13:14:40 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.704 13:14:40 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:38.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:38.704 13:14:40 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.704 13:14:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:38.704 [2024-09-27 13:14:40.492463] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:09:38.704 [2024-09-27 13:14:40.492793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57639 ] 00:09:38.968 [2024-09-27 13:14:40.792065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.242 [2024-09-27 13:14:40.857923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.242 [2024-09-27 13:14:40.884652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.810 00:09:39.810 INFO: shutting down applications... 00:09:39.810 13:14:41 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.810 13:14:41 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:09:39.810 13:14:41 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:39.810 13:14:41 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
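The shutdown traced below follows a simple pattern: send SIGINT to the target and poll the pid until it exits, giving up after 30 half-second checks. A sketch of that loop, with the pid and limits taken from the trace:
kill -SIGINT 57639
for ((i = 0; i < 30; i++)); do
    # kill -0 only probes whether the pid still exists
    if ! kill -0 57639 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done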
00:09:39.810 13:14:41 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:39.810 13:14:41 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:39.810 13:14:41 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:39.810 13:14:41 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57639 ]] 00:09:39.810 13:14:41 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57639 00:09:39.810 13:14:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:39.810 13:14:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:39.810 13:14:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57639 00:09:39.810 13:14:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:40.377 13:14:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:40.377 13:14:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:40.377 13:14:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57639 00:09:40.377 13:14:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:40.377 SPDK target shutdown done 00:09:40.377 Success 00:09:40.377 13:14:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:40.377 13:14:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:40.377 13:14:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:40.377 13:14:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:40.377 ************************************ 00:09:40.377 END TEST json_config_extra_key 00:09:40.377 ************************************ 00:09:40.377 00:09:40.377 real 0m1.773s 00:09:40.377 user 0m1.667s 00:09:40.377 sys 0m0.340s 00:09:40.377 13:14:41 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.377 13:14:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:40.377 13:14:42 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:40.377 13:14:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:40.377 13:14:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.377 13:14:42 -- common/autotest_common.sh@10 -- # set +x 00:09:40.377 ************************************ 00:09:40.377 START TEST alias_rpc 00:09:40.377 ************************************ 00:09:40.377 13:14:42 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:40.377 * Looking for test storage... 
00:09:40.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:40.377 13:14:42 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:40.377 13:14:42 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:40.377 13:14:42 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:40.377 13:14:42 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:40.377 13:14:42 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.377 13:14:42 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.377 13:14:42 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.377 13:14:42 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.377 13:14:42 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.377 13:14:42 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.377 13:14:42 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.377 13:14:42 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.377 13:14:42 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.377 13:14:42 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.378 13:14:42 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.378 13:14:42 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:40.378 13:14:42 alias_rpc -- scripts/common.sh@345 -- # : 1 00:09:40.378 13:14:42 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.378 13:14:42 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:40.378 13:14:42 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:40.378 13:14:42 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:40.378 13:14:42 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.378 13:14:42 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:40.378 13:14:42 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.378 13:14:42 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:40.378 13:14:42 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:40.378 13:14:42 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.378 13:14:42 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:40.636 13:14:42 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.636 13:14:42 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.636 13:14:42 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.636 13:14:42 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:40.636 13:14:42 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.636 13:14:42 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:40.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.636 --rc genhtml_branch_coverage=1 00:09:40.636 --rc genhtml_function_coverage=1 00:09:40.636 --rc genhtml_legend=1 00:09:40.636 --rc geninfo_all_blocks=1 00:09:40.636 --rc geninfo_unexecuted_blocks=1 00:09:40.636 00:09:40.636 ' 00:09:40.636 13:14:42 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:40.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.636 --rc genhtml_branch_coverage=1 00:09:40.636 --rc genhtml_function_coverage=1 00:09:40.636 --rc genhtml_legend=1 00:09:40.636 --rc geninfo_all_blocks=1 00:09:40.636 --rc geninfo_unexecuted_blocks=1 00:09:40.636 00:09:40.636 ' 00:09:40.636 13:14:42 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:40.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.636 --rc genhtml_branch_coverage=1 00:09:40.636 --rc genhtml_function_coverage=1 00:09:40.636 --rc genhtml_legend=1 00:09:40.636 --rc geninfo_all_blocks=1 00:09:40.636 --rc geninfo_unexecuted_blocks=1 00:09:40.636 00:09:40.636 ' 00:09:40.636 13:14:42 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:40.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.636 --rc genhtml_branch_coverage=1 00:09:40.636 --rc genhtml_function_coverage=1 00:09:40.636 --rc genhtml_legend=1 00:09:40.636 --rc geninfo_all_blocks=1 00:09:40.636 --rc geninfo_unexecuted_blocks=1 00:09:40.636 00:09:40.636 ' 00:09:40.636 13:14:42 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:40.636 13:14:42 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57717 00:09:40.636 13:14:42 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:40.636 13:14:42 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57717 00:09:40.636 13:14:42 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57717 ']' 00:09:40.636 13:14:42 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.636 13:14:42 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.636 13:14:42 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.636 13:14:42 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.636 13:14:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.636 [2024-09-27 13:14:42.292926] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
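Once this target is up, the alias test drives it through load_config with the -i switch (visible in the trace below). A minimal sketch of that call; reading -i as "accept deprecated RPC aliases" is inferred from the test's purpose, and the config path is a placeholder, not a file from this run:
# Feed a JSON config that uses old (aliased) RPC method names back into the
# running target; the trace only shows 'load_config -i', so the long option
# name and the input file here are assumptions.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < /path/to/config_with_aliases.json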
00:09:40.636 [2024-09-27 13:14:42.293211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57717 ] 00:09:40.637 [2024-09-27 13:14:42.432531] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.895 [2024-09-27 13:14:42.493547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.895 [2024-09-27 13:14:42.535259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:40.895 13:14:42 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.895 13:14:42 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:40.895 13:14:42 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:41.153 13:14:42 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57717 00:09:41.153 13:14:42 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57717 ']' 00:09:41.153 13:14:42 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57717 00:09:41.153 13:14:42 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:09:41.153 13:14:42 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:41.153 13:14:42 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57717 00:09:41.153 killing process with pid 57717 00:09:41.153 13:14:42 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:41.153 13:14:42 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:41.153 13:14:42 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57717' 00:09:41.153 13:14:42 alias_rpc -- common/autotest_common.sh@969 -- # kill 57717 00:09:41.153 13:14:42 alias_rpc -- common/autotest_common.sh@974 -- # wait 57717 00:09:41.721 ************************************ 00:09:41.721 END TEST alias_rpc 00:09:41.721 ************************************ 00:09:41.721 00:09:41.721 real 0m1.224s 00:09:41.721 user 0m1.412s 00:09:41.721 sys 0m0.314s 00:09:41.721 13:14:43 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.721 13:14:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.721 13:14:43 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:41.721 13:14:43 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:41.721 13:14:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:41.721 13:14:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.721 13:14:43 -- common/autotest_common.sh@10 -- # set +x 00:09:41.721 ************************************ 00:09:41.721 START TEST spdkcli_tcp 00:09:41.721 ************************************ 00:09:41.721 13:14:43 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:41.721 * Looking for test storage... 
00:09:41.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:41.721 13:14:43 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:41.721 13:14:43 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:41.721 13:14:43 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:41.721 13:14:43 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.721 13:14:43 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:41.721 13:14:43 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.721 13:14:43 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:41.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.721 --rc genhtml_branch_coverage=1 00:09:41.721 --rc genhtml_function_coverage=1 00:09:41.721 --rc genhtml_legend=1 00:09:41.721 --rc geninfo_all_blocks=1 00:09:41.721 --rc geninfo_unexecuted_blocks=1 00:09:41.721 00:09:41.721 ' 00:09:41.721 13:14:43 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:41.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.721 --rc genhtml_branch_coverage=1 00:09:41.721 --rc genhtml_function_coverage=1 00:09:41.721 --rc genhtml_legend=1 00:09:41.721 --rc geninfo_all_blocks=1 00:09:41.721 --rc geninfo_unexecuted_blocks=1 00:09:41.721 
00:09:41.721 ' 00:09:41.721 13:14:43 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:41.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.721 --rc genhtml_branch_coverage=1 00:09:41.721 --rc genhtml_function_coverage=1 00:09:41.721 --rc genhtml_legend=1 00:09:41.721 --rc geninfo_all_blocks=1 00:09:41.721 --rc geninfo_unexecuted_blocks=1 00:09:41.721 00:09:41.721 ' 00:09:41.721 13:14:43 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:41.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.721 --rc genhtml_branch_coverage=1 00:09:41.721 --rc genhtml_function_coverage=1 00:09:41.721 --rc genhtml_legend=1 00:09:41.721 --rc geninfo_all_blocks=1 00:09:41.721 --rc geninfo_unexecuted_blocks=1 00:09:41.721 00:09:41.721 ' 00:09:41.721 13:14:43 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:41.721 13:14:43 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:41.721 13:14:43 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:41.721 13:14:43 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:41.721 13:14:43 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:41.721 13:14:43 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:41.721 13:14:43 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:41.721 13:14:43 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:41.721 13:14:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:41.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.721 13:14:43 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57788 00:09:41.721 13:14:43 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:41.721 13:14:43 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57788 00:09:41.721 13:14:43 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 57788 ']' 00:09:41.721 13:14:43 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.721 13:14:43 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:41.721 13:14:43 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.721 13:14:43 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:41.721 13:14:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:41.980 [2024-09-27 13:14:43.574388] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
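spdkcli_tcp checks that rpc.py can reach the target over TCP as well as over its UNIX socket. The bridge is plain socat, as the trace below shows; a sketch with the address, port and rpc.py options copied from the log (the final kill is illustrative cleanup):
# Expose the target's UNIX-domain RPC socket on 127.0.0.1:9998 ...
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# ... then talk to it over TCP; -r and -t are the retry and timeout knobs the
# test passes, -s/-p select the TCP address and port instead of a socket path.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill $socat_pid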
00:09:41.980 [2024-09-27 13:14:43.574510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57788 ] 00:09:41.980 [2024-09-27 13:14:43.717390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:41.980 [2024-09-27 13:14:43.776139] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.980 [2024-09-27 13:14:43.776149] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.980 [2024-09-27 13:14:43.826188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.289 13:14:43 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:42.289 13:14:43 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:09:42.289 13:14:43 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57797 00:09:42.289 13:14:43 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:42.289 13:14:43 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:42.549 [ 00:09:42.549 "bdev_malloc_delete", 00:09:42.549 "bdev_malloc_create", 00:09:42.549 "bdev_null_resize", 00:09:42.549 "bdev_null_delete", 00:09:42.549 "bdev_null_create", 00:09:42.549 "bdev_nvme_cuse_unregister", 00:09:42.549 "bdev_nvme_cuse_register", 00:09:42.549 "bdev_opal_new_user", 00:09:42.549 "bdev_opal_set_lock_state", 00:09:42.549 "bdev_opal_delete", 00:09:42.549 "bdev_opal_get_info", 00:09:42.549 "bdev_opal_create", 00:09:42.549 "bdev_nvme_opal_revert", 00:09:42.549 "bdev_nvme_opal_init", 00:09:42.549 "bdev_nvme_send_cmd", 00:09:42.549 "bdev_nvme_set_keys", 00:09:42.549 "bdev_nvme_get_path_iostat", 00:09:42.549 "bdev_nvme_get_mdns_discovery_info", 00:09:42.549 "bdev_nvme_stop_mdns_discovery", 00:09:42.549 "bdev_nvme_start_mdns_discovery", 00:09:42.549 "bdev_nvme_set_multipath_policy", 00:09:42.549 "bdev_nvme_set_preferred_path", 00:09:42.549 "bdev_nvme_get_io_paths", 00:09:42.549 "bdev_nvme_remove_error_injection", 00:09:42.549 "bdev_nvme_add_error_injection", 00:09:42.549 "bdev_nvme_get_discovery_info", 00:09:42.549 "bdev_nvme_stop_discovery", 00:09:42.549 "bdev_nvme_start_discovery", 00:09:42.549 "bdev_nvme_get_controller_health_info", 00:09:42.549 "bdev_nvme_disable_controller", 00:09:42.549 "bdev_nvme_enable_controller", 00:09:42.549 "bdev_nvme_reset_controller", 00:09:42.549 "bdev_nvme_get_transport_statistics", 00:09:42.549 "bdev_nvme_apply_firmware", 00:09:42.549 "bdev_nvme_detach_controller", 00:09:42.549 "bdev_nvme_get_controllers", 00:09:42.549 "bdev_nvme_attach_controller", 00:09:42.549 "bdev_nvme_set_hotplug", 00:09:42.549 "bdev_nvme_set_options", 00:09:42.549 "bdev_passthru_delete", 00:09:42.549 "bdev_passthru_create", 00:09:42.549 "bdev_lvol_set_parent_bdev", 00:09:42.549 "bdev_lvol_set_parent", 00:09:42.549 "bdev_lvol_check_shallow_copy", 00:09:42.549 "bdev_lvol_start_shallow_copy", 00:09:42.549 "bdev_lvol_grow_lvstore", 00:09:42.549 "bdev_lvol_get_lvols", 00:09:42.549 "bdev_lvol_get_lvstores", 00:09:42.549 "bdev_lvol_delete", 00:09:42.549 "bdev_lvol_set_read_only", 00:09:42.549 "bdev_lvol_resize", 00:09:42.549 "bdev_lvol_decouple_parent", 00:09:42.549 "bdev_lvol_inflate", 00:09:42.549 "bdev_lvol_rename", 00:09:42.549 "bdev_lvol_clone_bdev", 00:09:42.549 "bdev_lvol_clone", 00:09:42.549 "bdev_lvol_snapshot", 
00:09:42.549 "bdev_lvol_create", 00:09:42.549 "bdev_lvol_delete_lvstore", 00:09:42.549 "bdev_lvol_rename_lvstore", 00:09:42.549 "bdev_lvol_create_lvstore", 00:09:42.549 "bdev_raid_set_options", 00:09:42.549 "bdev_raid_remove_base_bdev", 00:09:42.549 "bdev_raid_add_base_bdev", 00:09:42.549 "bdev_raid_delete", 00:09:42.549 "bdev_raid_create", 00:09:42.549 "bdev_raid_get_bdevs", 00:09:42.549 "bdev_error_inject_error", 00:09:42.549 "bdev_error_delete", 00:09:42.549 "bdev_error_create", 00:09:42.549 "bdev_split_delete", 00:09:42.549 "bdev_split_create", 00:09:42.549 "bdev_delay_delete", 00:09:42.549 "bdev_delay_create", 00:09:42.549 "bdev_delay_update_latency", 00:09:42.549 "bdev_zone_block_delete", 00:09:42.549 "bdev_zone_block_create", 00:09:42.549 "blobfs_create", 00:09:42.549 "blobfs_detect", 00:09:42.549 "blobfs_set_cache_size", 00:09:42.549 "bdev_aio_delete", 00:09:42.549 "bdev_aio_rescan", 00:09:42.549 "bdev_aio_create", 00:09:42.549 "bdev_ftl_set_property", 00:09:42.549 "bdev_ftl_get_properties", 00:09:42.549 "bdev_ftl_get_stats", 00:09:42.549 "bdev_ftl_unmap", 00:09:42.549 "bdev_ftl_unload", 00:09:42.549 "bdev_ftl_delete", 00:09:42.549 "bdev_ftl_load", 00:09:42.549 "bdev_ftl_create", 00:09:42.549 "bdev_virtio_attach_controller", 00:09:42.549 "bdev_virtio_scsi_get_devices", 00:09:42.549 "bdev_virtio_detach_controller", 00:09:42.549 "bdev_virtio_blk_set_hotplug", 00:09:42.549 "bdev_iscsi_delete", 00:09:42.549 "bdev_iscsi_create", 00:09:42.549 "bdev_iscsi_set_options", 00:09:42.549 "bdev_uring_delete", 00:09:42.549 "bdev_uring_rescan", 00:09:42.549 "bdev_uring_create", 00:09:42.549 "accel_error_inject_error", 00:09:42.549 "ioat_scan_accel_module", 00:09:42.549 "dsa_scan_accel_module", 00:09:42.549 "iaa_scan_accel_module", 00:09:42.549 "keyring_file_remove_key", 00:09:42.549 "keyring_file_add_key", 00:09:42.549 "keyring_linux_set_options", 00:09:42.549 "fsdev_aio_delete", 00:09:42.549 "fsdev_aio_create", 00:09:42.549 "iscsi_get_histogram", 00:09:42.549 "iscsi_enable_histogram", 00:09:42.549 "iscsi_set_options", 00:09:42.549 "iscsi_get_auth_groups", 00:09:42.549 "iscsi_auth_group_remove_secret", 00:09:42.549 "iscsi_auth_group_add_secret", 00:09:42.549 "iscsi_delete_auth_group", 00:09:42.549 "iscsi_create_auth_group", 00:09:42.549 "iscsi_set_discovery_auth", 00:09:42.549 "iscsi_get_options", 00:09:42.549 "iscsi_target_node_request_logout", 00:09:42.549 "iscsi_target_node_set_redirect", 00:09:42.549 "iscsi_target_node_set_auth", 00:09:42.549 "iscsi_target_node_add_lun", 00:09:42.549 "iscsi_get_stats", 00:09:42.549 "iscsi_get_connections", 00:09:42.549 "iscsi_portal_group_set_auth", 00:09:42.549 "iscsi_start_portal_group", 00:09:42.549 "iscsi_delete_portal_group", 00:09:42.549 "iscsi_create_portal_group", 00:09:42.549 "iscsi_get_portal_groups", 00:09:42.549 "iscsi_delete_target_node", 00:09:42.549 "iscsi_target_node_remove_pg_ig_maps", 00:09:42.549 "iscsi_target_node_add_pg_ig_maps", 00:09:42.549 "iscsi_create_target_node", 00:09:42.549 "iscsi_get_target_nodes", 00:09:42.549 "iscsi_delete_initiator_group", 00:09:42.549 "iscsi_initiator_group_remove_initiators", 00:09:42.549 "iscsi_initiator_group_add_initiators", 00:09:42.549 "iscsi_create_initiator_group", 00:09:42.549 "iscsi_get_initiator_groups", 00:09:42.549 "nvmf_set_crdt", 00:09:42.549 "nvmf_set_config", 00:09:42.549 "nvmf_set_max_subsystems", 00:09:42.549 "nvmf_stop_mdns_prr", 00:09:42.549 "nvmf_publish_mdns_prr", 00:09:42.549 "nvmf_subsystem_get_listeners", 00:09:42.549 "nvmf_subsystem_get_qpairs", 00:09:42.549 
"nvmf_subsystem_get_controllers", 00:09:42.549 "nvmf_get_stats", 00:09:42.550 "nvmf_get_transports", 00:09:42.550 "nvmf_create_transport", 00:09:42.550 "nvmf_get_targets", 00:09:42.550 "nvmf_delete_target", 00:09:42.550 "nvmf_create_target", 00:09:42.550 "nvmf_subsystem_allow_any_host", 00:09:42.550 "nvmf_subsystem_set_keys", 00:09:42.550 "nvmf_subsystem_remove_host", 00:09:42.550 "nvmf_subsystem_add_host", 00:09:42.550 "nvmf_ns_remove_host", 00:09:42.550 "nvmf_ns_add_host", 00:09:42.550 "nvmf_subsystem_remove_ns", 00:09:42.550 "nvmf_subsystem_set_ns_ana_group", 00:09:42.550 "nvmf_subsystem_add_ns", 00:09:42.550 "nvmf_subsystem_listener_set_ana_state", 00:09:42.550 "nvmf_discovery_get_referrals", 00:09:42.550 "nvmf_discovery_remove_referral", 00:09:42.550 "nvmf_discovery_add_referral", 00:09:42.550 "nvmf_subsystem_remove_listener", 00:09:42.550 "nvmf_subsystem_add_listener", 00:09:42.550 "nvmf_delete_subsystem", 00:09:42.550 "nvmf_create_subsystem", 00:09:42.550 "nvmf_get_subsystems", 00:09:42.550 "env_dpdk_get_mem_stats", 00:09:42.550 "nbd_get_disks", 00:09:42.550 "nbd_stop_disk", 00:09:42.550 "nbd_start_disk", 00:09:42.550 "ublk_recover_disk", 00:09:42.550 "ublk_get_disks", 00:09:42.550 "ublk_stop_disk", 00:09:42.550 "ublk_start_disk", 00:09:42.550 "ublk_destroy_target", 00:09:42.550 "ublk_create_target", 00:09:42.550 "virtio_blk_create_transport", 00:09:42.550 "virtio_blk_get_transports", 00:09:42.550 "vhost_controller_set_coalescing", 00:09:42.550 "vhost_get_controllers", 00:09:42.550 "vhost_delete_controller", 00:09:42.550 "vhost_create_blk_controller", 00:09:42.550 "vhost_scsi_controller_remove_target", 00:09:42.550 "vhost_scsi_controller_add_target", 00:09:42.550 "vhost_start_scsi_controller", 00:09:42.550 "vhost_create_scsi_controller", 00:09:42.550 "thread_set_cpumask", 00:09:42.550 "scheduler_set_options", 00:09:42.550 "framework_get_governor", 00:09:42.550 "framework_get_scheduler", 00:09:42.550 "framework_set_scheduler", 00:09:42.550 "framework_get_reactors", 00:09:42.550 "thread_get_io_channels", 00:09:42.550 "thread_get_pollers", 00:09:42.550 "thread_get_stats", 00:09:42.550 "framework_monitor_context_switch", 00:09:42.550 "spdk_kill_instance", 00:09:42.550 "log_enable_timestamps", 00:09:42.550 "log_get_flags", 00:09:42.550 "log_clear_flag", 00:09:42.550 "log_set_flag", 00:09:42.550 "log_get_level", 00:09:42.550 "log_set_level", 00:09:42.550 "log_get_print_level", 00:09:42.550 "log_set_print_level", 00:09:42.550 "framework_enable_cpumask_locks", 00:09:42.550 "framework_disable_cpumask_locks", 00:09:42.550 "framework_wait_init", 00:09:42.550 "framework_start_init", 00:09:42.550 "scsi_get_devices", 00:09:42.550 "bdev_get_histogram", 00:09:42.550 "bdev_enable_histogram", 00:09:42.550 "bdev_set_qos_limit", 00:09:42.550 "bdev_set_qd_sampling_period", 00:09:42.550 "bdev_get_bdevs", 00:09:42.550 "bdev_reset_iostat", 00:09:42.550 "bdev_get_iostat", 00:09:42.550 "bdev_examine", 00:09:42.550 "bdev_wait_for_examine", 00:09:42.550 "bdev_set_options", 00:09:42.550 "accel_get_stats", 00:09:42.550 "accel_set_options", 00:09:42.550 "accel_set_driver", 00:09:42.550 "accel_crypto_key_destroy", 00:09:42.550 "accel_crypto_keys_get", 00:09:42.550 "accel_crypto_key_create", 00:09:42.550 "accel_assign_opc", 00:09:42.550 "accel_get_module_info", 00:09:42.550 "accel_get_opc_assignments", 00:09:42.550 "vmd_rescan", 00:09:42.550 "vmd_remove_device", 00:09:42.550 "vmd_enable", 00:09:42.550 "sock_get_default_impl", 00:09:42.550 "sock_set_default_impl", 00:09:42.550 "sock_impl_set_options", 00:09:42.550 
"sock_impl_get_options", 00:09:42.550 "iobuf_get_stats", 00:09:42.550 "iobuf_set_options", 00:09:42.550 "keyring_get_keys", 00:09:42.550 "framework_get_pci_devices", 00:09:42.550 "framework_get_config", 00:09:42.550 "framework_get_subsystems", 00:09:42.550 "fsdev_set_opts", 00:09:42.550 "fsdev_get_opts", 00:09:42.550 "trace_get_info", 00:09:42.550 "trace_get_tpoint_group_mask", 00:09:42.550 "trace_disable_tpoint_group", 00:09:42.550 "trace_enable_tpoint_group", 00:09:42.550 "trace_clear_tpoint_mask", 00:09:42.550 "trace_set_tpoint_mask", 00:09:42.550 "notify_get_notifications", 00:09:42.550 "notify_get_types", 00:09:42.550 "spdk_get_version", 00:09:42.550 "rpc_get_methods" 00:09:42.550 ] 00:09:42.550 13:14:44 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:42.550 13:14:44 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:42.550 13:14:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:42.550 13:14:44 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:42.550 13:14:44 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57788 00:09:42.550 13:14:44 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57788 ']' 00:09:42.550 13:14:44 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57788 00:09:42.550 13:14:44 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:09:42.550 13:14:44 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.550 13:14:44 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57788 00:09:42.550 killing process with pid 57788 00:09:42.550 13:14:44 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:42.550 13:14:44 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:42.550 13:14:44 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57788' 00:09:42.550 13:14:44 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57788 00:09:42.550 13:14:44 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57788 00:09:42.808 ************************************ 00:09:42.808 END TEST spdkcli_tcp 00:09:42.808 ************************************ 00:09:42.808 00:09:42.808 real 0m1.327s 00:09:42.808 user 0m2.340s 00:09:42.808 sys 0m0.361s 00:09:42.808 13:14:44 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.808 13:14:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:43.066 13:14:44 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:43.066 13:14:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:43.066 13:14:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.066 13:14:44 -- common/autotest_common.sh@10 -- # set +x 00:09:43.067 ************************************ 00:09:43.067 START TEST dpdk_mem_utility 00:09:43.067 ************************************ 00:09:43.067 13:14:44 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:43.067 * Looking for test storage... 
00:09:43.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:43.067 13:14:44 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:43.067 13:14:44 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:09:43.067 13:14:44 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:43.067 13:14:44 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.067 13:14:44 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:43.067 13:14:44 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:43.067 13:14:44 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:43.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.067 --rc genhtml_branch_coverage=1 00:09:43.067 --rc genhtml_function_coverage=1 00:09:43.067 --rc genhtml_legend=1 00:09:43.067 --rc geninfo_all_blocks=1 00:09:43.067 --rc geninfo_unexecuted_blocks=1 00:09:43.067 00:09:43.067 ' 00:09:43.067 13:14:44 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:43.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.067 --rc genhtml_branch_coverage=1 00:09:43.067 --rc genhtml_function_coverage=1 00:09:43.067 --rc genhtml_legend=1 00:09:43.067 --rc geninfo_all_blocks=1 00:09:43.067 --rc geninfo_unexecuted_blocks=1 00:09:43.067 00:09:43.067 ' 00:09:43.067 13:14:44 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:43.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.067 --rc genhtml_branch_coverage=1 00:09:43.067 --rc genhtml_function_coverage=1 00:09:43.067 --rc genhtml_legend=1 00:09:43.067 --rc geninfo_all_blocks=1 00:09:43.067 --rc geninfo_unexecuted_blocks=1 00:09:43.067 00:09:43.067 ' 00:09:43.067 13:14:44 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:43.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.067 --rc genhtml_branch_coverage=1 00:09:43.067 --rc genhtml_function_coverage=1 00:09:43.067 --rc genhtml_legend=1 00:09:43.067 --rc geninfo_all_blocks=1 00:09:43.067 --rc geninfo_unexecuted_blocks=1 00:09:43.067 00:09:43.067 ' 00:09:43.067 13:14:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:43.067 13:14:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57879 00:09:43.067 13:14:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57879 00:09:43.067 13:14:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:43.067 13:14:44 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 57879 ']' 00:09:43.067 13:14:44 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.067 13:14:44 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.067 13:14:44 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.067 13:14:44 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.067 13:14:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:43.326 [2024-09-27 13:14:44.954285] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
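The dpdk_mem_utility test below asks the running target to dump its DPDK memory statistics and then post-processes the dump with dpdk_mem_info.py. A sketch of that flow, with paths and flags copied from the trace (reading -m 0 as "show heap 0 in detail" is inferred from the output that follows):
# The RPC returns the dump location; the trace shows /tmp/spdk_mem_dump.txt.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats

# Summarize heaps, mempools and memzones from the dump ...
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

# ... and drill into heap 0 (the free/malloc element listing seen below).
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0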
00:09:43.326 [2024-09-27 13:14:44.954390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57879 ] 00:09:43.326 [2024-09-27 13:14:45.093441] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.326 [2024-09-27 13:14:45.153041] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.585 [2024-09-27 13:14:45.193410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:44.151 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:44.151 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:09:44.151 13:14:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:44.151 13:14:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:44.151 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.151 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:44.151 { 00:09:44.151 "filename": "/tmp/spdk_mem_dump.txt" 00:09:44.151 } 00:09:44.151 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.151 13:14:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:44.413 DPDK memory size 860.000000 MiB in 1 heap(s) 00:09:44.413 1 heaps totaling size 860.000000 MiB 00:09:44.413 size: 860.000000 MiB heap id: 0 00:09:44.413 end heaps---------- 00:09:44.413 9 mempools totaling size 642.649841 MiB 00:09:44.413 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:44.413 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:44.413 size: 92.545471 MiB name: bdev_io_57879 00:09:44.413 size: 51.011292 MiB name: evtpool_57879 00:09:44.413 size: 50.003479 MiB name: msgpool_57879 00:09:44.413 size: 36.509338 MiB name: fsdev_io_57879 00:09:44.414 size: 21.763794 MiB name: PDU_Pool 00:09:44.414 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:44.414 size: 0.026123 MiB name: Session_Pool 00:09:44.414 end mempools------- 00:09:44.414 6 memzones totaling size 4.142822 MiB 00:09:44.414 size: 1.000366 MiB name: RG_ring_0_57879 00:09:44.414 size: 1.000366 MiB name: RG_ring_1_57879 00:09:44.414 size: 1.000366 MiB name: RG_ring_4_57879 00:09:44.414 size: 1.000366 MiB name: RG_ring_5_57879 00:09:44.414 size: 0.125366 MiB name: RG_ring_2_57879 00:09:44.414 size: 0.015991 MiB name: RG_ring_3_57879 00:09:44.414 end memzones------- 00:09:44.414 13:14:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:44.414 heap id: 0 total size: 860.000000 MiB number of busy elements: 306 number of free elements: 16 00:09:44.414 list of free elements. 
size: 13.936707 MiB 00:09:44.414 element at address: 0x200000400000 with size: 1.999512 MiB 00:09:44.414 element at address: 0x200000800000 with size: 1.996948 MiB 00:09:44.414 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:09:44.414 element at address: 0x20001be00000 with size: 0.999878 MiB 00:09:44.414 element at address: 0x200034a00000 with size: 0.994446 MiB 00:09:44.414 element at address: 0x200009600000 with size: 0.959839 MiB 00:09:44.414 element at address: 0x200015e00000 with size: 0.954285 MiB 00:09:44.414 element at address: 0x20001c000000 with size: 0.936584 MiB 00:09:44.414 element at address: 0x200000200000 with size: 0.834839 MiB 00:09:44.414 element at address: 0x20001d800000 with size: 0.568420 MiB 00:09:44.414 element at address: 0x20000d800000 with size: 0.489258 MiB 00:09:44.414 element at address: 0x200003e00000 with size: 0.487915 MiB 00:09:44.414 element at address: 0x20001c200000 with size: 0.485657 MiB 00:09:44.414 element at address: 0x200007000000 with size: 0.480469 MiB 00:09:44.414 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:09:44.414 element at address: 0x200003a00000 with size: 0.353027 MiB 00:09:44.414 list of standard malloc elements. size: 199.266602 MiB 00:09:44.414 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:09:44.414 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:09:44.414 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:09:44.414 element at address: 0x20001befff80 with size: 1.000122 MiB 00:09:44.414 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:09:44.414 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:44.414 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:09:44.414 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:44.414 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:09:44.414 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d6d40 with size: 0.000183 MiB 
00:09:44.414 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003aff880 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003affa80 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003affb40 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:09:44.414 element at 
address: 0x200003e7d600 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:09:44.414 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x20000707b000 with size: 0.000183 MiB 00:09:44.414 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x20000707b180 with size: 0.000183 MiB 00:09:44.414 element at address: 0x20000707b240 with size: 0.000183 MiB 00:09:44.414 element at address: 0x20000707b300 with size: 0.000183 MiB 00:09:44.414 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x20000707b480 with size: 0.000183 MiB 00:09:44.414 element at address: 0x20000707b540 with size: 0.000183 MiB 00:09:44.414 element at address: 0x20000707b600 with size: 0.000183 MiB 00:09:44.414 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:09:44.414 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:09:44.415 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20000d87d640 
with size: 0.000183 MiB 00:09:44.415 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:09:44.415 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d891840 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d891900 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d892080 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d892140 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d892200 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d892380 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d892440 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d892500 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d892680 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d892740 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d892800 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d892980 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d893040 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d893100 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d893280 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d893340 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d893400 with size: 0.000183 MiB 
00:09:44.415 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d893580 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d893640 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d893700 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d893880 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d893940 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d894000 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d894180 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d894240 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d894300 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d894480 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d894540 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d894600 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d894780 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d894840 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d894900 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d895080 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d895140 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d895200 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d895380 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20001d895440 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:09:44.415 element at 
address: 0x20002ac6c600 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:09:44.415 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6eac0 
with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:09:44.416 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:09:44.416 list of memzone associated elements. 
size: 646.796692 MiB 00:09:44.416 element at address: 0x20001d895500 with size: 211.416748 MiB 00:09:44.416 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:44.416 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:09:44.416 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:44.416 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:09:44.416 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57879_0 00:09:44.416 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:44.416 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57879_0 00:09:44.416 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:44.416 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57879_0 00:09:44.416 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:09:44.416 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57879_0 00:09:44.416 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:09:44.416 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:44.416 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:09:44.416 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:44.416 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:44.416 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57879 00:09:44.416 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:44.416 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57879 00:09:44.416 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:44.416 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57879 00:09:44.416 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:09:44.416 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:44.416 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:09:44.416 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:44.416 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:09:44.416 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:44.416 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:09:44.416 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:44.416 element at address: 0x200003eff180 with size: 1.000488 MiB 00:09:44.416 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57879 00:09:44.416 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:44.416 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57879 00:09:44.416 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:09:44.416 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57879 00:09:44.416 element at address: 0x200034afe940 with size: 1.000488 MiB 00:09:44.416 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57879 00:09:44.416 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:09:44.416 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57879 00:09:44.416 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:09:44.416 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57879 00:09:44.416 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:09:44.416 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:44.416 element at address: 0x20000707b780 with size: 0.500488 MiB 00:09:44.416 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:09:44.416 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:09:44.416 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:44.416 element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:09:44.416 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57879 00:09:44.416 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:09:44.416 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:44.416 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:09:44.416 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:44.416 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:09:44.416 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57879 00:09:44.416 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:09:44.416 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:44.416 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:09:44.416 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57879 00:09:44.416 element at address: 0x200003aff940 with size: 0.000305 MiB 00:09:44.416 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57879 00:09:44.416 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:09:44.416 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57879 00:09:44.416 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:09:44.416 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:44.416 13:14:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:44.416 13:14:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57879 00:09:44.416 13:14:46 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 57879 ']' 00:09:44.416 13:14:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 57879 00:09:44.416 13:14:46 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:09:44.416 13:14:46 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:44.416 13:14:46 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57879 00:09:44.416 killing process with pid 57879 00:09:44.416 13:14:46 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:44.416 13:14:46 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:44.416 13:14:46 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57879' 00:09:44.416 13:14:46 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 57879 00:09:44.416 13:14:46 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 57879 00:09:44.675 00:09:44.675 real 0m1.709s 00:09:44.675 user 0m1.981s 00:09:44.675 sys 0m0.339s 00:09:44.675 13:14:46 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.675 13:14:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:44.675 ************************************ 00:09:44.675 END TEST dpdk_mem_utility 00:09:44.675 ************************************ 00:09:44.675 13:14:46 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:44.675 13:14:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:44.675 13:14:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.675 13:14:46 -- common/autotest_common.sh@10 -- # set +x 
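Stripped of the xtrace noise, the dpdk_mem_utility test that just finished is a short pipeline: launch spdk_tgt, wait until its RPC socket answers, ask it to dump DPDK memory statistics, and post-process the dump with scripts/dpdk_mem_info.py. A condensed sketch using the same paths seen in this workspace (the inline polling loop is a stand-in for the waitforlisten/killprocess helpers from autotest_common.sh):

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" &                      # RPC server comes up on /var/tmp/spdk.sock
  tgt_pid=$!
  trap 'kill "$tgt_pid"' EXIT
  until "$SPDK/scripts/rpc.py" rpc_get_methods &>/dev/null; do sleep 0.5; done
  "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats      # writes /tmp/spdk_mem_dump.txt (see the JSON reply above)
  "$SPDK/scripts/dpdk_mem_info.py"                   # heap / mempool / memzone summary
  "$SPDK/scripts/dpdk_mem_info.py" -m 0              # per-element breakdown of heap id 0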
00:09:44.675 ************************************ 00:09:44.675 START TEST event 00:09:44.675 ************************************ 00:09:44.675 13:14:46 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:44.940 * Looking for test storage... 00:09:44.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:44.940 13:14:46 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:44.940 13:14:46 event -- common/autotest_common.sh@1681 -- # lcov --version 00:09:44.940 13:14:46 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:44.940 13:14:46 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:44.940 13:14:46 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.940 13:14:46 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.940 13:14:46 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.940 13:14:46 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.940 13:14:46 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.940 13:14:46 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.940 13:14:46 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.940 13:14:46 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.940 13:14:46 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.940 13:14:46 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.940 13:14:46 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.940 13:14:46 event -- scripts/common.sh@344 -- # case "$op" in 00:09:44.940 13:14:46 event -- scripts/common.sh@345 -- # : 1 00:09:44.940 13:14:46 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.940 13:14:46 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:44.940 13:14:46 event -- scripts/common.sh@365 -- # decimal 1 00:09:44.940 13:14:46 event -- scripts/common.sh@353 -- # local d=1 00:09:44.940 13:14:46 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.940 13:14:46 event -- scripts/common.sh@355 -- # echo 1 00:09:44.940 13:14:46 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.940 13:14:46 event -- scripts/common.sh@366 -- # decimal 2 00:09:44.940 13:14:46 event -- scripts/common.sh@353 -- # local d=2 00:09:44.940 13:14:46 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.940 13:14:46 event -- scripts/common.sh@355 -- # echo 2 00:09:44.940 13:14:46 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.940 13:14:46 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.940 13:14:46 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.940 13:14:46 event -- scripts/common.sh@368 -- # return 0 00:09:44.940 13:14:46 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.940 13:14:46 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:44.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.940 --rc genhtml_branch_coverage=1 00:09:44.940 --rc genhtml_function_coverage=1 00:09:44.940 --rc genhtml_legend=1 00:09:44.940 --rc geninfo_all_blocks=1 00:09:44.940 --rc geninfo_unexecuted_blocks=1 00:09:44.940 00:09:44.940 ' 00:09:44.940 13:14:46 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:44.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.940 --rc genhtml_branch_coverage=1 00:09:44.940 --rc genhtml_function_coverage=1 00:09:44.940 --rc genhtml_legend=1 00:09:44.941 --rc 
geninfo_all_blocks=1 00:09:44.941 --rc geninfo_unexecuted_blocks=1 00:09:44.941 00:09:44.941 ' 00:09:44.941 13:14:46 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:44.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.941 --rc genhtml_branch_coverage=1 00:09:44.941 --rc genhtml_function_coverage=1 00:09:44.941 --rc genhtml_legend=1 00:09:44.941 --rc geninfo_all_blocks=1 00:09:44.941 --rc geninfo_unexecuted_blocks=1 00:09:44.941 00:09:44.941 ' 00:09:44.941 13:14:46 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:44.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.941 --rc genhtml_branch_coverage=1 00:09:44.941 --rc genhtml_function_coverage=1 00:09:44.941 --rc genhtml_legend=1 00:09:44.941 --rc geninfo_all_blocks=1 00:09:44.941 --rc geninfo_unexecuted_blocks=1 00:09:44.941 00:09:44.941 ' 00:09:44.941 13:14:46 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:44.941 13:14:46 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:44.941 13:14:46 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:44.941 13:14:46 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:44.941 13:14:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.941 13:14:46 event -- common/autotest_common.sh@10 -- # set +x 00:09:44.941 ************************************ 00:09:44.941 START TEST event_perf 00:09:44.941 ************************************ 00:09:44.941 13:14:46 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:44.941 Running I/O for 1 seconds...[2024-09-27 13:14:46.666355] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:09:44.941 [2024-09-27 13:14:46.666568] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57959 ] 00:09:45.199 [2024-09-27 13:14:46.801785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:45.199 [2024-09-27 13:14:46.864820] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.199 [2024-09-27 13:14:46.864944] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.199 [2024-09-27 13:14:46.865077] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.199 [2024-09-27 13:14:46.865081] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.134 Running I/O for 1 seconds... 00:09:46.134 lcore 0: 183938 00:09:46.134 lcore 1: 183933 00:09:46.134 lcore 2: 183936 00:09:46.134 lcore 3: 183937 00:09:46.134 done. 
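event_perf above was run with core mask 0xF and a one-second duration, so each of the four reactors reports its own event count in the 'lcore N:' lines. If those counts ever need to be totalled from a saved run, a small hypothetical helper (perf.log is a made-up file name; the test itself only prints the per-lcore numbers):

  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 | tee perf.log
  awk '/^lcore [0-9]+:/ { total += $3 } END { print "total events:", total }' perf.log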
00:09:46.392 00:09:46.392 real 0m1.333s 00:09:46.392 user 0m4.145s 00:09:46.392 sys 0m0.050s 00:09:46.392 13:14:47 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.392 13:14:47 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:46.392 ************************************ 00:09:46.392 END TEST event_perf 00:09:46.392 ************************************ 00:09:46.392 13:14:48 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:46.392 13:14:48 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:46.392 13:14:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.392 13:14:48 event -- common/autotest_common.sh@10 -- # set +x 00:09:46.392 ************************************ 00:09:46.392 START TEST event_reactor 00:09:46.392 ************************************ 00:09:46.392 13:14:48 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:46.392 [2024-09-27 13:14:48.053989] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:09:46.392 [2024-09-27 13:14:48.054105] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57997 ] 00:09:46.392 [2024-09-27 13:14:48.192514] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.650 [2024-09-27 13:14:48.262646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.587 test_start 00:09:47.587 oneshot 00:09:47.587 tick 100 00:09:47.587 tick 100 00:09:47.587 tick 250 00:09:47.587 tick 100 00:09:47.587 tick 100 00:09:47.587 tick 250 00:09:47.587 tick 100 00:09:47.587 tick 500 00:09:47.587 tick 100 00:09:47.587 tick 100 00:09:47.587 tick 250 00:09:47.587 tick 100 00:09:47.587 tick 100 00:09:47.587 test_end 00:09:47.587 00:09:47.587 real 0m1.298s 00:09:47.587 user 0m1.147s 00:09:47.587 sys 0m0.045s 00:09:47.587 ************************************ 00:09:47.587 END TEST event_reactor 00:09:47.587 ************************************ 00:09:47.587 13:14:49 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.587 13:14:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:47.587 13:14:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:47.587 13:14:49 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:47.587 13:14:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.587 13:14:49 event -- common/autotest_common.sh@10 -- # set +x 00:09:47.587 ************************************ 00:09:47.587 START TEST event_reactor_perf 00:09:47.587 ************************************ 00:09:47.587 13:14:49 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:47.587 [2024-09-27 13:14:49.406986] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:09:47.587 [2024-09-27 13:14:49.407230] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58033 ] 00:09:47.845 [2024-09-27 13:14:49.545243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.845 [2024-09-27 13:14:49.606108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.222 test_start 00:09:49.222 test_end 00:09:49.223 Performance: 370599 events per second 00:09:49.223 ************************************ 00:09:49.223 END TEST event_reactor_perf 00:09:49.223 ************************************ 00:09:49.223 00:09:49.223 real 0m1.290s 00:09:49.223 user 0m1.140s 00:09:49.223 sys 0m0.044s 00:09:49.223 13:14:50 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.223 13:14:50 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:49.223 13:14:50 event -- event/event.sh@49 -- # uname -s 00:09:49.223 13:14:50 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:49.223 13:14:50 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:49.223 13:14:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:49.223 13:14:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.223 13:14:50 event -- common/autotest_common.sh@10 -- # set +x 00:09:49.223 ************************************ 00:09:49.223 START TEST event_scheduler 00:09:49.223 ************************************ 00:09:49.223 13:14:50 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:49.223 * Looking for test storage... 
00:09:49.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:49.223 13:14:50 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:49.223 13:14:50 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:09:49.223 13:14:50 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:49.223 13:14:50 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.223 13:14:50 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:49.223 13:14:50 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.223 13:14:50 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:49.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.223 --rc genhtml_branch_coverage=1 00:09:49.223 --rc genhtml_function_coverage=1 00:09:49.223 --rc genhtml_legend=1 00:09:49.223 --rc geninfo_all_blocks=1 00:09:49.223 --rc geninfo_unexecuted_blocks=1 00:09:49.223 00:09:49.223 ' 00:09:49.223 13:14:50 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:49.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.223 --rc genhtml_branch_coverage=1 00:09:49.223 --rc genhtml_function_coverage=1 00:09:49.223 --rc genhtml_legend=1 00:09:49.223 --rc geninfo_all_blocks=1 00:09:49.223 --rc geninfo_unexecuted_blocks=1 00:09:49.223 00:09:49.223 ' 00:09:49.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
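The 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' message is printed by waitforlisten() in autotest_common.sh, which blocks the test until the freshly launched SPDK app both stays alive and answers RPCs on that socket. A stripped-down stand-in for that loop (hypothetical name; the real helper is more careful about retries and error reporting):

  wait_for_rpc_socket() {     # hypothetical stand-in for waitforlisten()
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do                              # max_retries=100, as in the trace
          kill -0 "$pid" 2>/dev/null || return 1                   # app died before listening
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.5
      done
      return 1
  }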
00:09:49.223 13:14:50 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:49.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.223 --rc genhtml_branch_coverage=1 00:09:49.223 --rc genhtml_function_coverage=1 00:09:49.223 --rc genhtml_legend=1 00:09:49.223 --rc geninfo_all_blocks=1 00:09:49.223 --rc geninfo_unexecuted_blocks=1 00:09:49.223 00:09:49.223 ' 00:09:49.223 13:14:50 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:49.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.223 --rc genhtml_branch_coverage=1 00:09:49.223 --rc genhtml_function_coverage=1 00:09:49.223 --rc genhtml_legend=1 00:09:49.223 --rc geninfo_all_blocks=1 00:09:49.223 --rc geninfo_unexecuted_blocks=1 00:09:49.223 00:09:49.223 ' 00:09:49.223 13:14:50 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:49.223 13:14:50 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58097 00:09:49.223 13:14:50 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:49.223 13:14:50 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58097 00:09:49.223 13:14:50 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:49.223 13:14:50 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58097 ']' 00:09:49.223 13:14:50 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.223 13:14:50 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.223 13:14:50 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.223 13:14:50 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.223 13:14:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:49.223 [2024-09-27 13:14:50.965380] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:09:49.223 [2024-09-27 13:14:50.965705] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58097 ] 00:09:49.482 [2024-09-27 13:14:51.104342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:49.482 [2024-09-27 13:14:51.177382] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.482 [2024-09-27 13:14:51.179735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.482 [2024-09-27 13:14:51.179833] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.482 [2024-09-27 13:14:51.179842] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.482 13:14:51 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.482 13:14:51 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:09:49.482 13:14:51 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:49.482 13:14:51 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.482 13:14:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:49.482 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:49.482 POWER: Cannot set governor of lcore 0 to userspace 00:09:49.482 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:49.482 POWER: Cannot set governor of lcore 0 to performance 00:09:49.482 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:49.482 POWER: Cannot set governor of lcore 0 to userspace 00:09:49.482 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:49.482 POWER: Cannot set governor of lcore 0 to userspace 00:09:49.482 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:09:49.482 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:49.482 POWER: Unable to set Power Management Environment for lcore 0 00:09:49.482 [2024-09-27 13:14:51.261432] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:09:49.482 [2024-09-27 13:14:51.261444] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:09:49.482 [2024-09-27 13:14:51.261454] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:49.482 [2024-09-27 13:14:51.261466] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:49.482 [2024-09-27 13:14:51.261473] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:49.482 [2024-09-27 13:14:51.261480] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:49.482 13:14:51 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.482 13:14:51 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:49.482 13:14:51 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.482 13:14:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:49.482 [2024-09-27 13:14:51.298361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:49.482 [2024-09-27 13:14:51.317039] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:49.482 13:14:51 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.482 13:14:51 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:49.482 13:14:51 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:49.482 13:14:51 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.482 13:14:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:49.741 ************************************ 00:09:49.741 START TEST scheduler_create_thread 00:09:49.741 ************************************ 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.741 2 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.741 3 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.741 4 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.741 5 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.741 6 00:09:49.741 
13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.741 7 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.741 8 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.741 9 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.741 10 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.741 13:14:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:50.673 13:14:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.674 13:14:52 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:50.674 13:14:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.674 13:14:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:52.072 13:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.073 13:14:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:52.073 13:14:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:52.073 13:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.073 13:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:53.005 ************************************ 00:09:53.005 END TEST scheduler_create_thread 00:09:53.005 ************************************ 00:09:53.005 13:14:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.005 00:09:53.005 real 0m3.373s 00:09:53.005 user 0m0.019s 00:09:53.005 sys 0m0.005s 00:09:53.005 13:14:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.005 13:14:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:53.005 13:14:54 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:53.005 13:14:54 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58097 00:09:53.005 13:14:54 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58097 ']' 00:09:53.005 13:14:54 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58097 00:09:53.005 13:14:54 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:09:53.005 13:14:54 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:53.005 13:14:54 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58097 00:09:53.005 killing process with pid 58097 00:09:53.005 13:14:54 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:09:53.005 13:14:54 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:09:53.005 13:14:54 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58097' 00:09:53.005 13:14:54 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58097 00:09:53.005 13:14:54 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58097 00:09:53.262 [2024-09-27 13:14:55.081159] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
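The block above is the scheduler_create_thread subtest driving SPDK's scheduler test app over RPC. As a rough sketch of the same sequence outside the rpc_cmd wrapper (assumptions: scripts/rpc.py is called directly with the scheduler_plugin module reachable on PYTHONPATH, the app listens on the default RPC socket, and thread IDs 11/12 plus pid 58097 are specific to this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  plug="--plugin scheduler_plugin"                     # test/event/scheduler/scheduler_plugin.py

  $rpc framework_set_scheduler dynamic                 # switch the running app to the dynamic scheduler
  $rpc framework_start_init

  # Busy threads pinned one per core, plus matching idle threads
  for mask in 0x1 0x2 0x4 0x8; do
    $rpc $plug scheduler_thread_create -n active_pinned -m "$mask" -a 100
    $rpc $plug scheduler_thread_create -n idle_pinned   -m "$mask" -a 0
  done

  # Unpinned threads; the create call prints the new thread id
  $rpc $plug scheduler_thread_create -n one_third_active -a 30
  tid=$($rpc $plug scheduler_thread_create -n half_active -a 0)
  $rpc $plug scheduler_thread_set_active "$tid" 50     # bump it to 50% busy

  tid=$($rpc $plug scheduler_thread_create -n deleted -a 100)
  $rpc $plug scheduler_thread_delete "$tid"            # create-then-delete case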
00:09:53.521 ************************************ 00:09:53.521 END TEST event_scheduler 00:09:53.521 ************************************ 00:09:53.521 00:09:53.521 real 0m4.558s 00:09:53.521 user 0m7.876s 00:09:53.521 sys 0m0.299s 00:09:53.521 13:14:55 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.521 13:14:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:53.521 13:14:55 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:53.521 13:14:55 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:53.521 13:14:55 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:53.521 13:14:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.521 13:14:55 event -- common/autotest_common.sh@10 -- # set +x 00:09:53.521 ************************************ 00:09:53.521 START TEST app_repeat 00:09:53.521 ************************************ 00:09:53.521 13:14:55 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:09:53.521 13:14:55 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.521 13:14:55 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:53.521 13:14:55 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:53.521 13:14:55 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:53.521 13:14:55 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:53.521 13:14:55 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:53.521 13:14:55 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:53.521 Process app_repeat pid: 58200 00:09:53.521 13:14:55 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58200 00:09:53.521 13:14:55 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:53.521 13:14:55 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:53.521 13:14:55 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58200' 00:09:53.521 spdk_app_start Round 0 00:09:53.521 13:14:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:53.521 13:14:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:53.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:53.521 13:14:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58200 /var/tmp/spdk-nbd.sock 00:09:53.521 13:14:55 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58200 ']' 00:09:53.521 13:14:55 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:53.521 13:14:55 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:53.521 13:14:55 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:53.521 13:14:55 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:53.521 13:14:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:53.521 [2024-09-27 13:14:55.366250] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
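Before the per-round output below, the app_repeat target has just been launched on its own RPC socket. A minimal sketch of that bring-up, assuming the SPDK test helpers are sourced (waitforlisten comes from test/common/autotest_common.sh); pid 58200 and the workspace paths are particular to this run:

  modprobe nbd                                         # the rounds below export bdevs as /dev/nbd*
  app=/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  $app -r "$sock" -m 0x3 -t 4 &                        # same arguments as in the trace: core mask 0x3, -t 4
  repeat_pid=$!
  waitforlisten "$repeat_pid" "$sock"                  # block until the app answers on its socket

  # Every subsequent RPC targets the app's private socket, e.g.
  $rpc -s "$sock" bdev_malloc_create 64 4096           # 64 MB malloc bdev, 4096-byte blocks (Malloc0)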
00:09:53.521 [2024-09-27 13:14:55.366359] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58200 ] 00:09:53.779 [2024-09-27 13:14:55.500181] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:53.779 [2024-09-27 13:14:55.560114] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.779 [2024-09-27 13:14:55.560126] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.779 [2024-09-27 13:14:55.590104] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:54.036 13:14:55 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.036 13:14:55 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:54.036 13:14:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:54.293 Malloc0 00:09:54.293 13:14:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:54.551 Malloc1 00:09:54.551 13:14:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:54.551 13:14:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.551 13:14:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:54.551 13:14:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:54.551 13:14:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:54.551 13:14:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:54.551 13:14:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:54.551 13:14:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.551 13:14:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:54.551 13:14:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:54.551 13:14:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:54.551 13:14:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:54.551 13:14:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:54.551 13:14:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:54.551 13:14:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:54.551 13:14:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:54.809 /dev/nbd0 00:09:54.809 13:14:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:54.809 13:14:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:54.809 13:14:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:54.809 13:14:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:54.809 13:14:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:54.809 13:14:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:54.809 13:14:56 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:54.809 13:14:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:54.809 13:14:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:54.809 13:14:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:54.809 13:14:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:54.809 1+0 records in 00:09:54.809 1+0 records out 00:09:54.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206871 s, 19.8 MB/s 00:09:54.809 13:14:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:54.809 13:14:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:54.809 13:14:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:54.809 13:14:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:54.809 13:14:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:54.809 13:14:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:54.809 13:14:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:54.809 13:14:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:55.066 /dev/nbd1 00:09:55.066 13:14:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:55.066 13:14:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:55.066 13:14:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:55.066 13:14:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:55.066 13:14:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:55.066 13:14:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:55.066 13:14:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:55.066 13:14:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:55.066 13:14:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:55.066 13:14:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:55.066 13:14:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:55.066 1+0 records in 00:09:55.066 1+0 records out 00:09:55.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231228 s, 17.7 MB/s 00:09:55.066 13:14:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:55.066 13:14:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:55.066 13:14:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:55.066 13:14:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:55.066 13:14:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:55.066 13:14:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:55.066 13:14:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:55.066 13:14:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:09:55.066 13:14:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.066 13:14:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:55.324 13:14:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:55.324 { 00:09:55.324 "nbd_device": "/dev/nbd0", 00:09:55.324 "bdev_name": "Malloc0" 00:09:55.324 }, 00:09:55.324 { 00:09:55.324 "nbd_device": "/dev/nbd1", 00:09:55.324 "bdev_name": "Malloc1" 00:09:55.324 } 00:09:55.324 ]' 00:09:55.324 13:14:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:55.324 { 00:09:55.324 "nbd_device": "/dev/nbd0", 00:09:55.324 "bdev_name": "Malloc0" 00:09:55.324 }, 00:09:55.324 { 00:09:55.324 "nbd_device": "/dev/nbd1", 00:09:55.324 "bdev_name": "Malloc1" 00:09:55.324 } 00:09:55.324 ]' 00:09:55.324 13:14:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:55.582 /dev/nbd1' 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:55.582 /dev/nbd1' 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:55.582 256+0 records in 00:09:55.582 256+0 records out 00:09:55.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00743752 s, 141 MB/s 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:55.582 256+0 records in 00:09:55.582 256+0 records out 00:09:55.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256773 s, 40.8 MB/s 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:55.582 256+0 records in 00:09:55.582 256+0 records out 00:09:55.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269772 s, 38.9 MB/s 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:55.582 13:14:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:55.840 13:14:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:55.840 13:14:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:55.840 13:14:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:55.840 13:14:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:55.840 13:14:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:55.840 13:14:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:55.840 13:14:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:55.840 13:14:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:55.840 13:14:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:55.840 13:14:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:56.098 13:14:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:56.098 13:14:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:56.098 13:14:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:56.098 13:14:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.098 13:14:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.098 13:14:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:56.098 13:14:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:56.098 13:14:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.098 13:14:57 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:56.098 13:14:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:56.098 13:14:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:56.663 13:14:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:56.663 13:14:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:56.663 13:14:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:56.663 13:14:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:56.663 13:14:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:56.663 13:14:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:56.663 13:14:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:56.663 13:14:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:56.663 13:14:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:56.663 13:14:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:56.663 13:14:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:56.663 13:14:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:56.663 13:14:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:56.924 13:14:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:56.924 [2024-09-27 13:14:58.667822] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:56.924 [2024-09-27 13:14:58.726676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.925 [2024-09-27 13:14:58.726662] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.925 [2024-09-27 13:14:58.756205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:56.925 [2024-09-27 13:14:58.756291] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:56.925 [2024-09-27 13:14:58.756305] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:00.202 13:15:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:00.202 spdk_app_start Round 1 00:10:00.202 13:15:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:00.202 13:15:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58200 /var/tmp/spdk-nbd.sock 00:10:00.202 13:15:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58200 ']' 00:10:00.202 13:15:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:00.202 13:15:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:00.202 13:15:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
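Round 0 above exercised the nbd write/verify path, and the rounds that follow repeat it. A condensed sketch of that data path, using the commands visible in the trace (they are wrapped by nbd_rpc_data_verify in test/common/nbd_common.sh; file locations match this workspace):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

  $rpc nbd_start_disk Malloc0 /dev/nbd0                # export each malloc bdev as an nbd block device
  $rpc nbd_start_disk Malloc1 /dev/nbd1

  dd if=/dev/urandom of="$tmp" bs=4096 count=256       # 1 MiB of random reference data
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it through the device
    cmp -b -n 1M "$tmp" "$nbd"                         # read back and compare byte for byte
  done
  rm "$tmp"

  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
  $rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true   # expect 0 exports left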
00:10:00.202 13:15:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.202 13:15:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:00.203 13:15:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.203 13:15:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:10:00.203 13:15:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:00.460 Malloc0 00:10:00.460 13:15:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:00.719 Malloc1 00:10:00.719 13:15:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:00.719 13:15:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:00.719 13:15:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:00.719 13:15:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:00.719 13:15:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:00.719 13:15:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:00.719 13:15:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:00.719 13:15:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:00.719 13:15:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:00.719 13:15:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:00.719 13:15:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:00.719 13:15:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:00.719 13:15:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:00.719 13:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:00.719 13:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:00.719 13:15:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:00.977 /dev/nbd0 00:10:00.977 13:15:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:00.977 13:15:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:00.977 13:15:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:00.977 13:15:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:00.977 13:15:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:00.977 13:15:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:00.977 13:15:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:00.977 13:15:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:00.977 13:15:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:00.977 13:15:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:00.977 13:15:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:00.977 1+0 records in 00:10:00.977 1+0 records out 
00:10:00.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277225 s, 14.8 MB/s 00:10:00.977 13:15:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:00.977 13:15:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:00.977 13:15:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:00.977 13:15:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:00.977 13:15:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:00.977 13:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:00.977 13:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:00.977 13:15:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:01.235 /dev/nbd1 00:10:01.235 13:15:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:01.235 13:15:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:01.235 13:15:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:01.235 13:15:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:01.235 13:15:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:01.235 13:15:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:01.235 13:15:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:01.235 13:15:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:01.235 13:15:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:01.235 13:15:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:01.235 13:15:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:01.235 1+0 records in 00:10:01.235 1+0 records out 00:10:01.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330341 s, 12.4 MB/s 00:10:01.235 13:15:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:01.235 13:15:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:01.235 13:15:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:01.235 13:15:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:01.235 13:15:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:01.235 13:15:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:01.235 13:15:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:01.235 13:15:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:01.235 13:15:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:01.235 13:15:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:01.494 13:15:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:01.494 { 00:10:01.494 "nbd_device": "/dev/nbd0", 00:10:01.494 "bdev_name": "Malloc0" 00:10:01.494 }, 00:10:01.494 { 00:10:01.494 "nbd_device": "/dev/nbd1", 00:10:01.494 "bdev_name": "Malloc1" 00:10:01.494 } 
00:10:01.494 ]' 00:10:01.494 13:15:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:01.494 { 00:10:01.494 "nbd_device": "/dev/nbd0", 00:10:01.494 "bdev_name": "Malloc0" 00:10:01.494 }, 00:10:01.494 { 00:10:01.494 "nbd_device": "/dev/nbd1", 00:10:01.494 "bdev_name": "Malloc1" 00:10:01.494 } 00:10:01.494 ]' 00:10:01.494 13:15:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:01.753 /dev/nbd1' 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:01.753 /dev/nbd1' 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:01.753 256+0 records in 00:10:01.753 256+0 records out 00:10:01.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00827424 s, 127 MB/s 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:01.753 256+0 records in 00:10:01.753 256+0 records out 00:10:01.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190385 s, 55.1 MB/s 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:01.753 256+0 records in 00:10:01.753 256+0 records out 00:10:01.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233976 s, 44.8 MB/s 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:01.753 13:15:03 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:01.753 13:15:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:02.013 13:15:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:02.013 13:15:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:02.013 13:15:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:02.013 13:15:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:02.013 13:15:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:02.013 13:15:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:02.013 13:15:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:02.013 13:15:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:02.013 13:15:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:02.013 13:15:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:02.271 13:15:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:02.271 13:15:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:02.271 13:15:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:02.271 13:15:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:02.271 13:15:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:02.271 13:15:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:02.271 13:15:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:02.271 13:15:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:02.271 13:15:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:02.271 13:15:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:02.271 13:15:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:02.529 13:15:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:02.529 13:15:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:02.529 13:15:04 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:10:02.529 13:15:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:02.787 13:15:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:02.787 13:15:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:02.787 13:15:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:02.787 13:15:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:02.787 13:15:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:02.787 13:15:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:02.787 13:15:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:02.787 13:15:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:02.787 13:15:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:03.045 13:15:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:03.045 [2024-09-27 13:15:04.779907] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:03.045 [2024-09-27 13:15:04.838703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.045 [2024-09-27 13:15:04.838714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.045 [2024-09-27 13:15:04.869251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:03.045 [2024-09-27 13:15:04.869337] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:03.045 [2024-09-27 13:15:04.869352] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:06.326 13:15:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:06.326 spdk_app_start Round 2 00:10:06.326 13:15:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:06.326 13:15:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58200 /var/tmp/spdk-nbd.sock 00:10:06.326 13:15:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58200 ']' 00:10:06.326 13:15:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:06.326 13:15:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:06.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:06.326 13:15:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
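Each 'spdk_app_start Round N' block above and below follows the same loop from event.sh's app_repeat_test. A sketch of its shape (waitforlisten and nbd_rpc_data_verify are the SPDK test helpers already shown, and $repeat_pid is the app_repeat pid, 58200 in this run):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock # the socket reappears after each restart
    $rpc bdev_malloc_create 64 4096                    # Malloc0
    $rpc bdev_malloc_create 64 4096                    # Malloc1
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    $rpc spdk_kill_instance SIGTERM                    # app_repeat ends this iteration and starts the next
    sleep 3
  done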
00:10:06.326 13:15:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:06.326 13:15:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:06.326 13:15:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:06.326 13:15:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:10:06.326 13:15:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:06.326 Malloc0 00:10:06.584 13:15:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:06.843 Malloc1 00:10:06.843 13:15:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:06.843 13:15:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:06.843 13:15:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:06.843 13:15:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:06.843 13:15:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:06.843 13:15:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:06.843 13:15:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:06.843 13:15:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:06.843 13:15:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:06.843 13:15:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:06.843 13:15:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:06.843 13:15:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:06.843 13:15:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:06.843 13:15:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:06.843 13:15:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:06.843 13:15:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:07.101 /dev/nbd0 00:10:07.101 13:15:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:07.101 13:15:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:07.101 13:15:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:07.101 13:15:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:07.101 13:15:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:07.101 13:15:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:07.101 13:15:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:07.101 13:15:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:07.101 13:15:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:07.101 13:15:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:07.101 13:15:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:07.101 1+0 records in 00:10:07.101 1+0 records out 
00:10:07.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304907 s, 13.4 MB/s 00:10:07.101 13:15:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:07.101 13:15:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:07.101 13:15:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:07.101 13:15:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:07.101 13:15:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:07.101 13:15:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:07.101 13:15:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:07.101 13:15:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:07.360 /dev/nbd1 00:10:07.360 13:15:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:07.360 13:15:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:07.360 13:15:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:07.360 13:15:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:07.360 13:15:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:07.360 13:15:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:07.360 13:15:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:07.360 13:15:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:07.360 13:15:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:07.360 13:15:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:07.360 13:15:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:07.360 1+0 records in 00:10:07.360 1+0 records out 00:10:07.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328112 s, 12.5 MB/s 00:10:07.360 13:15:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:07.360 13:15:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:07.360 13:15:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:07.360 13:15:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:07.360 13:15:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:07.360 13:15:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:07.360 13:15:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:07.360 13:15:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:07.360 13:15:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:07.360 13:15:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:07.618 13:15:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:07.618 { 00:10:07.618 "nbd_device": "/dev/nbd0", 00:10:07.618 "bdev_name": "Malloc0" 00:10:07.618 }, 00:10:07.618 { 00:10:07.618 "nbd_device": "/dev/nbd1", 00:10:07.618 "bdev_name": "Malloc1" 00:10:07.618 } 
00:10:07.618 ]' 00:10:07.618 13:15:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:07.618 { 00:10:07.618 "nbd_device": "/dev/nbd0", 00:10:07.618 "bdev_name": "Malloc0" 00:10:07.618 }, 00:10:07.618 { 00:10:07.618 "nbd_device": "/dev/nbd1", 00:10:07.618 "bdev_name": "Malloc1" 00:10:07.618 } 00:10:07.618 ]' 00:10:07.618 13:15:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:07.618 13:15:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:07.618 /dev/nbd1' 00:10:07.618 13:15:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:07.618 /dev/nbd1' 00:10:07.618 13:15:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:07.618 13:15:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:07.618 13:15:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:07.618 13:15:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:07.618 13:15:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:07.618 13:15:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:07.618 13:15:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:07.618 13:15:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:07.618 13:15:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:07.618 13:15:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:07.618 13:15:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:07.619 13:15:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:07.619 256+0 records in 00:10:07.619 256+0 records out 00:10:07.619 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00829139 s, 126 MB/s 00:10:07.619 13:15:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:07.619 13:15:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:07.619 256+0 records in 00:10:07.619 256+0 records out 00:10:07.619 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249979 s, 41.9 MB/s 00:10:07.619 13:15:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:07.619 13:15:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:07.877 256+0 records in 00:10:07.877 256+0 records out 00:10:07.877 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274934 s, 38.1 MB/s 00:10:07.877 13:15:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:07.877 13:15:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:07.877 13:15:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:07.877 13:15:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:07.877 13:15:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:07.877 13:15:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:07.877 13:15:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:07.877 13:15:09 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:07.877 13:15:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:07.877 13:15:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:07.877 13:15:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:07.877 13:15:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:07.877 13:15:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:07.877 13:15:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:07.877 13:15:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:07.877 13:15:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:07.877 13:15:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:07.877 13:15:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:07.877 13:15:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:08.134 13:15:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:08.134 13:15:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:08.134 13:15:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:08.134 13:15:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:08.134 13:15:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:08.134 13:15:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:08.134 13:15:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:08.134 13:15:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:08.134 13:15:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:08.134 13:15:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:08.392 13:15:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:08.392 13:15:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:08.392 13:15:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:08.392 13:15:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:08.392 13:15:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:08.392 13:15:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:08.392 13:15:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:08.392 13:15:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:08.392 13:15:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:08.392 13:15:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:08.392 13:15:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:08.651 13:15:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:08.651 13:15:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:08.651 13:15:10 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:10:08.651 13:15:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:08.651 13:15:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:08.651 13:15:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:08.651 13:15:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:08.651 13:15:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:08.651 13:15:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:08.651 13:15:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:08.651 13:15:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:08.651 13:15:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:08.651 13:15:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:08.910 13:15:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:09.181 [2024-09-27 13:15:10.856398] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:09.181 [2024-09-27 13:15:10.916884] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.181 [2024-09-27 13:15:10.916897] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.181 [2024-09-27 13:15:10.946738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:09.181 [2024-09-27 13:15:10.946815] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:09.181 [2024-09-27 13:15:10.946838] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:12.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:12.481 13:15:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58200 /var/tmp/spdk-nbd.sock 00:10:12.481 13:15:13 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58200 ']' 00:10:12.481 13:15:13 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:12.481 13:15:13 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.481 13:15:13 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
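The write/verify pass traced above is plain dd-and-cmp: a 1 MiB file of random data is generated once, written to every exported nbd device with O_DIRECT, then compared back against the devices byte for byte before the temp file is removed. A condensed sketch of that flow, reusing the paths and sizes from this run (the real helper splits the write and verify phases into two separate calls):

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write it to each nbd device
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev"                              # read back and compare
    done
    rm "$tmp"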
00:10:12.481 13:15:13 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.481 13:15:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:12.481 13:15:14 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:12.481 13:15:14 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:10:12.481 13:15:14 event.app_repeat -- event/event.sh@39 -- # killprocess 58200 00:10:12.481 13:15:14 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58200 ']' 00:10:12.481 13:15:14 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58200 00:10:12.481 13:15:14 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:10:12.481 13:15:14 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:12.481 13:15:14 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58200 00:10:12.481 killing process with pid 58200 00:10:12.481 13:15:14 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:12.481 13:15:14 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:12.481 13:15:14 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58200' 00:10:12.481 13:15:14 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58200 00:10:12.481 13:15:14 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58200 00:10:12.481 spdk_app_start is called in Round 0. 00:10:12.481 Shutdown signal received, stop current app iteration 00:10:12.481 Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 reinitialization... 00:10:12.481 spdk_app_start is called in Round 1. 00:10:12.481 Shutdown signal received, stop current app iteration 00:10:12.481 Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 reinitialization... 00:10:12.481 spdk_app_start is called in Round 2. 00:10:12.481 Shutdown signal received, stop current app iteration 00:10:12.481 Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 reinitialization... 00:10:12.481 spdk_app_start is called in Round 3. 00:10:12.481 Shutdown signal received, stop current app iteration 00:10:12.481 13:15:14 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:12.481 13:15:14 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:12.481 00:10:12.481 real 0m18.859s 00:10:12.481 user 0m43.330s 00:10:12.481 sys 0m2.514s 00:10:12.481 ************************************ 00:10:12.481 END TEST app_repeat 00:10:12.481 ************************************ 00:10:12.481 13:15:14 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:12.481 13:15:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:12.481 13:15:14 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:12.481 13:15:14 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:12.481 13:15:14 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:12.481 13:15:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:12.481 13:15:14 event -- common/autotest_common.sh@10 -- # set +x 00:10:12.481 ************************************ 00:10:12.481 START TEST cpu_locks 00:10:12.481 ************************************ 00:10:12.481 13:15:14 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:12.481 * Looking for test storage... 
00:10:12.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:12.740 13:15:14 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:12.740 13:15:14 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:10:12.740 13:15:14 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:12.740 13:15:14 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.740 13:15:14 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:10:12.740 13:15:14 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.740 13:15:14 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:12.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.740 --rc genhtml_branch_coverage=1 00:10:12.740 --rc genhtml_function_coverage=1 00:10:12.740 --rc genhtml_legend=1 00:10:12.740 --rc geninfo_all_blocks=1 00:10:12.740 --rc geninfo_unexecuted_blocks=1 00:10:12.740 00:10:12.740 ' 00:10:12.740 13:15:14 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:12.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.740 --rc genhtml_branch_coverage=1 00:10:12.740 --rc genhtml_function_coverage=1 
00:10:12.740 --rc genhtml_legend=1 00:10:12.740 --rc geninfo_all_blocks=1 00:10:12.740 --rc geninfo_unexecuted_blocks=1 00:10:12.740 00:10:12.740 ' 00:10:12.740 13:15:14 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:12.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.740 --rc genhtml_branch_coverage=1 00:10:12.740 --rc genhtml_function_coverage=1 00:10:12.740 --rc genhtml_legend=1 00:10:12.740 --rc geninfo_all_blocks=1 00:10:12.740 --rc geninfo_unexecuted_blocks=1 00:10:12.740 00:10:12.740 ' 00:10:12.740 13:15:14 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:12.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.740 --rc genhtml_branch_coverage=1 00:10:12.740 --rc genhtml_function_coverage=1 00:10:12.740 --rc genhtml_legend=1 00:10:12.740 --rc geninfo_all_blocks=1 00:10:12.740 --rc geninfo_unexecuted_blocks=1 00:10:12.740 00:10:12.740 ' 00:10:12.740 13:15:14 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:12.740 13:15:14 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:12.740 13:15:14 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:12.740 13:15:14 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:12.740 13:15:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:12.740 13:15:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:12.740 13:15:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:12.740 ************************************ 00:10:12.740 START TEST default_locks 00:10:12.740 ************************************ 00:10:12.740 13:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:10:12.740 13:15:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58633 00:10:12.741 13:15:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58633 00:10:12.741 13:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58633 ']' 00:10:12.741 13:15:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:12.741 13:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.741 13:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.741 13:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.741 13:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.741 13:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:12.741 [2024-09-27 13:15:14.526363] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:12.741 [2024-09-27 13:15:14.527381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58633 ] 00:10:12.999 [2024-09-27 13:15:14.661155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.999 [2024-09-27 13:15:14.721120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.999 [2024-09-27 13:15:14.762044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:13.934 13:15:15 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.934 13:15:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:10:13.934 13:15:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58633 00:10:13.934 13:15:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:13.934 13:15:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58633 00:10:14.197 13:15:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58633 00:10:14.197 13:15:15 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58633 ']' 00:10:14.197 13:15:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58633 00:10:14.197 13:15:15 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:10:14.197 13:15:15 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:14.197 13:15:15 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58633 00:10:14.197 killing process with pid 58633 00:10:14.197 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:14.197 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:14.197 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58633' 00:10:14.197 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58633 00:10:14.197 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58633 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58633 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58633 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:14.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
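The locks_exist helper traced above reduces to a single pipeline: ask lslocks which file locks the target pid holds and grep the output for the spdk_cpu_lock prefix, which is how the suite confirms the reactor really claimed its core. A minimal sketch of that check, with the pid taken from this run (the lock files themselves show up later in the log as /var/tmp/spdk_cpu_lock_*):

    pid=58633
    # exits 0 only if the process still holds an spdk_cpu_lock file lock
    lslocks -p "$pid" | grep -q spdk_cpu_lock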
00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58633 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58633 ']' 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.458 ERROR: process (pid: 58633) is no longer running 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:14.458 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58633) - No such process 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:14.458 00:10:14.458 real 0m1.816s 00:10:14.458 user 0m2.049s 00:10:14.458 sys 0m0.500s 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.458 ************************************ 00:10:14.458 END TEST default_locks 00:10:14.458 ************************************ 00:10:14.458 13:15:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:14.717 13:15:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:14.717 13:15:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:14.717 13:15:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.717 13:15:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:14.717 ************************************ 00:10:14.717 START TEST default_locks_via_rpc 00:10:14.717 ************************************ 00:10:14.717 13:15:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:10:14.717 13:15:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58685 00:10:14.717 13:15:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:14.717 13:15:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # 
waitforlisten 58685 00:10:14.717 13:15:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58685 ']' 00:10:14.717 13:15:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.717 13:15:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.717 13:15:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.717 13:15:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.717 13:15:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.717 [2024-09-27 13:15:16.401032] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:10:14.717 [2024-09-27 13:15:16.401130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58685 ] 00:10:14.717 [2024-09-27 13:15:16.539455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.976 [2024-09-27 13:15:16.597930] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.976 [2024-09-27 13:15:16.637383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:14.976 13:15:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.976 13:15:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:14.976 13:15:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:14.976 13:15:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.976 13:15:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.976 13:15:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.976 13:15:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:14.976 13:15:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:14.976 13:15:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:14.976 13:15:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:14.976 13:15:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:14.976 13:15:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.976 13:15:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.976 13:15:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.976 13:15:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58685 00:10:14.976 13:15:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58685 00:10:14.976 13:15:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:15.543 
13:15:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58685 00:10:15.543 13:15:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58685 ']' 00:10:15.543 13:15:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58685 00:10:15.543 13:15:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:10:15.543 13:15:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.543 13:15:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58685 00:10:15.543 killing process with pid 58685 00:10:15.543 13:15:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:15.543 13:15:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:15.543 13:15:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58685' 00:10:15.543 13:15:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58685 00:10:15.543 13:15:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58685 00:10:15.802 00:10:15.802 real 0m1.200s 00:10:15.802 user 0m1.279s 00:10:15.802 sys 0m0.432s 00:10:15.802 13:15:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.802 13:15:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.802 ************************************ 00:10:15.802 END TEST default_locks_via_rpc 00:10:15.802 ************************************ 00:10:15.802 13:15:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:15.802 13:15:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:15.802 13:15:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.802 13:15:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:15.802 ************************************ 00:10:15.802 START TEST non_locking_app_on_locked_coremask 00:10:15.802 ************************************ 00:10:15.802 13:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:10:15.802 13:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58723 00:10:15.802 13:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58723 /var/tmp/spdk.sock 00:10:15.802 13:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:15.802 13:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58723 ']' 00:10:15.802 13:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
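The default_locks_via_rpc run that just finished toggles the core lock at runtime rather than at launch: the target starts normally, drops its lock via framework_disable_cpumask_locks, then re-claims it with framework_enable_cpumask_locks before the lslocks check. A minimal sketch of that sequence, assuming rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    pid=58685                                        # target pid from the run above
    "$rpc" framework_disable_cpumask_locks           # release the per-core lock file
    "$rpc" framework_enable_cpumask_locks            # claim it again
    lslocks -p "$pid" | grep -q spdk_cpu_lock        # expected to succeed again now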
00:10:15.802 13:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.802 13:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.802 13:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.802 13:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:16.061 [2024-09-27 13:15:17.654485] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:10:16.061 [2024-09-27 13:15:17.654591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58723 ] 00:10:16.061 [2024-09-27 13:15:17.793401] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.061 [2024-09-27 13:15:17.851377] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.061 [2024-09-27 13:15:17.891357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:16.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:16.998 13:15:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.998 13:15:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:16.998 13:15:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58739 00:10:16.998 13:15:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58739 /var/tmp/spdk2.sock 00:10:16.998 13:15:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58739 ']' 00:10:16.998 13:15:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:16.998 13:15:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.998 13:15:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:16.998 13:15:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:16.998 13:15:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.998 13:15:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:16.998 [2024-09-27 13:15:18.713184] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:16.998 [2024-09-27 13:15:18.713285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58739 ] 00:10:17.257 [2024-09-27 13:15:18.854875] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:17.257 [2024-09-27 13:15:18.854927] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.257 [2024-09-27 13:15:18.972066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.257 [2024-09-27 13:15:19.050382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:18.191 13:15:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:18.191 13:15:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:18.191 13:15:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58723 00:10:18.191 13:15:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58723 00:10:18.191 13:15:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:18.758 13:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58723 00:10:18.758 13:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58723 ']' 00:10:18.758 13:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58723 00:10:18.758 13:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:18.758 13:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:18.758 13:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58723 00:10:18.758 killing process with pid 58723 00:10:18.758 13:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:18.758 13:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:18.758 13:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58723' 00:10:18.758 13:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58723 00:10:18.758 13:15:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58723 00:10:19.325 13:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58739 00:10:19.325 13:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58739 ']' 00:10:19.325 13:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58739 00:10:19.325 13:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:19.325 13:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:19.325 13:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58739 00:10:19.325 killing process with pid 58739 00:10:19.325 13:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:19.325 13:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:19.325 13:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58739' 00:10:19.325 13:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58739 00:10:19.325 13:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58739 00:10:19.584 ************************************ 00:10:19.584 END TEST non_locking_app_on_locked_coremask 00:10:19.584 ************************************ 00:10:19.584 00:10:19.584 real 0m3.732s 00:10:19.584 user 0m4.479s 00:10:19.584 sys 0m0.885s 00:10:19.584 13:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.584 13:15:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:19.584 13:15:21 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:19.584 13:15:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:19.584 13:15:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.584 13:15:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:19.584 ************************************ 00:10:19.584 START TEST locking_app_on_unlocked_coremask 00:10:19.584 ************************************ 00:10:19.584 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:10:19.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.584 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58806 00:10:19.584 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58806 /var/tmp/spdk.sock 00:10:19.584 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58806 ']' 00:10:19.584 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.584 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:19.584 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.584 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:19.584 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:19.584 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:19.843 [2024-09-27 13:15:21.436718] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
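The non_locking_app_on_locked_coremask pass that ended above pairs one target that claims core 0 with a second one that opts out: --disable-cpumask-locks keeps the second instance from touching the lock file, and -r points it at its own RPC socket so the two do not collide. A condensed sketch of the two launches as traced, with the binary path and socket name taken from this run:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$bin" -m 0x1 &                                                   # first target, claims core 0
    "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # second target, same core, no lock, own socket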
00:10:19.843 [2024-09-27 13:15:21.436819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58806 ] 00:10:19.843 [2024-09-27 13:15:21.575954] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:19.843 [2024-09-27 13:15:21.575999] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.843 [2024-09-27 13:15:21.631797] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.843 [2024-09-27 13:15:21.670133] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:20.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:20.101 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:20.101 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:20.101 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:20.101 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58815 00:10:20.101 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58815 /var/tmp/spdk2.sock 00:10:20.101 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58815 ']' 00:10:20.101 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:20.101 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:20.101 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:20.101 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:20.101 13:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:20.101 [2024-09-27 13:15:21.846428] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:20.101 [2024-09-27 13:15:21.846717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58815 ] 00:10:20.359 [2024-09-27 13:15:21.987789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.359 [2024-09-27 13:15:22.103381] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.359 [2024-09-27 13:15:22.176865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:21.301 13:15:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.301 13:15:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:21.301 13:15:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58815 00:10:21.301 13:15:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58815 00:10:21.301 13:15:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:22.257 13:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58806 00:10:22.257 13:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58806 ']' 00:10:22.257 13:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 58806 00:10:22.257 13:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:22.257 13:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.257 13:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58806 00:10:22.257 killing process with pid 58806 00:10:22.257 13:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:22.257 13:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:22.257 13:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58806' 00:10:22.257 13:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 58806 00:10:22.257 13:15:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 58806 00:10:22.823 13:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58815 00:10:22.823 13:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58815 ']' 00:10:22.823 13:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 58815 00:10:22.823 13:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:22.823 13:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.823 13:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58815 00:10:22.823 killing process with pid 58815 00:10:22.823 13:15:24 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:22.823 13:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:22.823 13:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58815' 00:10:22.823 13:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 58815 00:10:22.823 13:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 58815 00:10:22.823 ************************************ 00:10:22.823 END TEST locking_app_on_unlocked_coremask 00:10:22.823 ************************************ 00:10:22.823 00:10:22.823 real 0m3.292s 00:10:22.823 user 0m3.849s 00:10:22.823 sys 0m0.945s 00:10:22.823 13:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.823 13:15:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:23.081 13:15:24 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:23.081 13:15:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:23.081 13:15:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.081 13:15:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:23.081 ************************************ 00:10:23.081 START TEST locking_app_on_locked_coremask 00:10:23.081 ************************************ 00:10:23.081 13:15:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:10:23.081 13:15:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58876 00:10:23.081 13:15:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:23.081 13:15:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58876 /var/tmp/spdk.sock 00:10:23.081 13:15:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58876 ']' 00:10:23.081 13:15:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.081 13:15:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:23.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.081 13:15:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.081 13:15:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:23.081 13:15:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:23.081 [2024-09-27 13:15:24.779192] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
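killprocess, traced repeatedly above, is deliberately defensive: it checks the pid is still alive with kill -0, reads the command name with ps (the trace then branches on whether that name is sudo), and only then signals and reaps the target. A minimal sketch of the happy path for a plain reactor process, with the pid from the run above:

    pid=58815
    kill -0 "$pid"                                   # still running?
    name=$(ps --no-headers -o comm= "$pid")          # reactor_0 for an SPDK target
    [[ "$name" != sudo ]]                            # the sudo case is handled separately in the suite
    kill "$pid"
    wait "$pid"                                      # reap it (works because it is a child of the test shell)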
00:10:23.081 [2024-09-27 13:15:24.779496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58876 ] 00:10:23.081 [2024-09-27 13:15:24.914858] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.339 [2024-09-27 13:15:24.972482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.339 [2024-09-27 13:15:25.012849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:23.339 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.339 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:23.339 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58885 00:10:23.339 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:23.339 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58885 /var/tmp/spdk2.sock 00:10:23.339 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:10:23.339 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58885 /var/tmp/spdk2.sock 00:10:23.339 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:23.339 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:23.339 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:23.339 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:23.339 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58885 /var/tmp/spdk2.sock 00:10:23.339 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58885 ']' 00:10:23.339 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:23.339 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:23.339 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:23.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:23.339 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:23.339 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:23.597 [2024-09-27 13:15:25.203014] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:23.597 [2024-09-27 13:15:25.203362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58885 ] 00:10:23.597 [2024-09-27 13:15:25.346152] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58876 has claimed it. 00:10:23.597 [2024-09-27 13:15:25.346223] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:24.163 ERROR: process (pid: 58885) is no longer running 00:10:24.163 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58885) - No such process 00:10:24.163 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:24.163 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:10:24.163 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:10:24.163 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:24.163 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:24.163 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:24.163 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58876 00:10:24.163 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58876 00:10:24.163 13:15:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:24.728 13:15:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58876 00:10:24.728 13:15:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58876 ']' 00:10:24.728 13:15:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58876 00:10:24.728 13:15:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:24.728 13:15:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:24.728 13:15:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58876 00:10:24.728 killing process with pid 58876 00:10:24.728 13:15:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:24.728 13:15:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:24.728 13:15:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58876' 00:10:24.728 13:15:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58876 00:10:24.728 13:15:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58876 00:10:24.986 00:10:24.986 real 0m1.969s 00:10:24.986 user 0m2.308s 00:10:24.986 sys 0m0.540s 00:10:24.986 13:15:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:24.986 13:15:26 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:10:24.986 ************************************ 00:10:24.986 END TEST locking_app_on_locked_coremask 00:10:24.986 ************************************ 00:10:24.986 13:15:26 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:24.986 13:15:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:24.986 13:15:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:24.986 13:15:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:24.986 ************************************ 00:10:24.986 START TEST locking_overlapped_coremask 00:10:24.986 ************************************ 00:10:24.986 13:15:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:10:24.986 13:15:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58930 00:10:24.986 13:15:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:24.986 13:15:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58930 /var/tmp/spdk.sock 00:10:24.986 13:15:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 58930 ']' 00:10:24.986 13:15:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.986 13:15:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:24.986 13:15:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.986 13:15:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:24.986 13:15:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:24.986 [2024-09-27 13:15:26.797163] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:24.986 [2024-09-27 13:15:26.797280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58930 ] 00:10:25.244 [2024-09-27 13:15:26.937179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:25.244 [2024-09-27 13:15:27.003345] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.244 [2024-09-27 13:15:27.003489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.244 [2024-09-27 13:15:27.003507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.244 [2024-09-27 13:15:27.045683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:25.501 13:15:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.501 13:15:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:25.501 13:15:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58941 00:10:25.501 13:15:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:25.501 13:15:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58941 /var/tmp/spdk2.sock 00:10:25.501 13:15:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:10:25.501 13:15:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58941 /var/tmp/spdk2.sock 00:10:25.501 13:15:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:25.501 13:15:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:25.501 13:15:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:25.501 13:15:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:25.501 13:15:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58941 /var/tmp/spdk2.sock 00:10:25.501 13:15:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 58941 ']' 00:10:25.501 13:15:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:25.501 13:15:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:25.501 13:15:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:25.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:25.501 13:15:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.501 13:15:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:25.501 [2024-09-27 13:15:27.245042] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:25.501 [2024-09-27 13:15:27.245140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58941 ] 00:10:25.758 [2024-09-27 13:15:27.390779] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58930 has claimed it. 00:10:25.758 [2024-09-27 13:15:27.390891] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:26.323 ERROR: process (pid: 58941) is no longer running 00:10:26.323 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58941) - No such process 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58930 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 58930 ']' 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 58930 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58930 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58930' 00:10:26.323 killing process with pid 58930 00:10:26.323 13:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 58930 00:10:26.323 13:15:28 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 58930 00:10:26.581 00:10:26.581 real 0m1.582s 00:10:26.581 user 0m4.366s 00:10:26.581 sys 0m0.306s 00:10:26.581 13:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.581 13:15:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:26.581 ************************************ 00:10:26.581 END TEST locking_overlapped_coremask 00:10:26.581 ************************************ 00:10:26.581 13:15:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:26.581 13:15:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:26.581 13:15:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.581 13:15:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:26.581 ************************************ 00:10:26.582 START TEST locking_overlapped_coremask_via_rpc 00:10:26.582 ************************************ 00:10:26.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.582 13:15:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:10:26.582 13:15:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58981 00:10:26.582 13:15:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58981 /var/tmp/spdk.sock 00:10:26.582 13:15:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58981 ']' 00:10:26.582 13:15:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:26.582 13:15:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.582 13:15:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:26.582 13:15:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.582 13:15:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:26.582 13:15:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.840 [2024-09-27 13:15:28.438308] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:10:26.840 [2024-09-27 13:15:28.438413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58981 ] 00:10:26.840 [2024-09-27 13:15:28.572628] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:26.840 [2024-09-27 13:15:28.572677] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:26.840 [2024-09-27 13:15:28.634321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.840 [2024-09-27 13:15:28.634459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.840 [2024-09-27 13:15:28.634459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.840 [2024-09-27 13:15:28.675033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:27.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:27.781 13:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:27.781 13:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:27.781 13:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58999 00:10:27.781 13:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:27.781 13:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58999 /var/tmp/spdk2.sock 00:10:27.781 13:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58999 ']' 00:10:27.781 13:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:27.781 13:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:27.781 13:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:27.781 13:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:27.781 13:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.781 [2024-09-27 13:15:29.461990] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:10:27.781 [2024-09-27 13:15:29.462532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58999 ] 00:10:27.781 [2024-09-27 13:15:29.608352] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:27.781 [2024-09-27 13:15:29.608400] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:28.040 [2024-09-27 13:15:29.738845] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:28.040 [2024-09-27 13:15:29.738911] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:28.040 [2024-09-27 13:15:29.738916] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:28.040 [2024-09-27 13:15:29.824251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.977 [2024-09-27 13:15:30.514975] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58981 has claimed it. 00:10:28.977 request: 00:10:28.977 { 00:10:28.977 "method": "framework_enable_cpumask_locks", 00:10:28.977 "req_id": 1 00:10:28.977 } 00:10:28.977 Got JSON-RPC error response 00:10:28.977 response: 00:10:28.977 { 00:10:28.977 "code": -32603, 00:10:28.977 "message": "Failed to claim CPU core: 2" 00:10:28.977 } 00:10:28.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
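For context on the error above: both targets in this test are started with --disable-cpumask-locks, so no core lock files exist until framework_enable_cpumask_locks is sent to the first target (pid 58981, core mask 0x7). When the same RPC is then issued to the second target (pid 58999, core mask 0x1c) over /var/tmp/spdk2.sock, the claim on the shared core 2 fails and the call returns -32603, which is exactly the failure the NOT wrapper expects. Stripped of the test harness, and assuming rpc_cmd is a thin wrapper around scripts/rpc.py, the exchange is roughly:

  scripts/rpc.py framework_enable_cpumask_locks                         # first target: takes /var/tmp/spdk_cpu_lock_000..002
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: core 2 already locked, fails with -32603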
00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58981 /var/tmp/spdk.sock 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58981 ']' 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58999 /var/tmp/spdk2.sock 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58999 ']' 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:28.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:28.977 13:15:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.236 13:15:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:29.236 13:15:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:29.236 13:15:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:29.236 13:15:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:29.236 13:15:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:29.236 13:15:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:29.236 00:10:29.236 real 0m2.712s 00:10:29.236 user 0m1.451s 00:10:29.236 sys 0m0.187s 00:10:29.236 13:15:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.236 13:15:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.494 ************************************ 00:10:29.494 END TEST locking_overlapped_coremask_via_rpc 00:10:29.494 ************************************ 00:10:29.494 13:15:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:29.494 13:15:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58981 ]] 00:10:29.494 13:15:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58981 00:10:29.494 13:15:31 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 58981 ']' 00:10:29.494 13:15:31 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 58981 00:10:29.494 13:15:31 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:10:29.494 13:15:31 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.494 13:15:31 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58981 00:10:29.494 killing process with pid 58981 00:10:29.494 13:15:31 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:29.494 13:15:31 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:29.494 13:15:31 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58981' 00:10:29.494 13:15:31 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 58981 00:10:29.494 13:15:31 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 58981 00:10:29.753 13:15:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58999 ]] 00:10:29.753 13:15:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58999 00:10:29.753 13:15:31 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 58999 ']' 00:10:29.753 13:15:31 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 58999 00:10:29.753 13:15:31 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:10:29.753 13:15:31 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.753 
13:15:31 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58999 00:10:29.753 killing process with pid 58999 00:10:29.753 13:15:31 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:10:29.753 13:15:31 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:10:29.753 13:15:31 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58999' 00:10:29.753 13:15:31 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 58999 00:10:29.753 13:15:31 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 58999 00:10:30.013 13:15:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:30.013 13:15:31 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:30.013 13:15:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58981 ]] 00:10:30.013 13:15:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58981 00:10:30.013 Process with pid 58981 is not found 00:10:30.013 13:15:31 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 58981 ']' 00:10:30.013 13:15:31 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 58981 00:10:30.013 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (58981) - No such process 00:10:30.013 13:15:31 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 58981 is not found' 00:10:30.013 13:15:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58999 ]] 00:10:30.013 13:15:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58999 00:10:30.013 13:15:31 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 58999 ']' 00:10:30.013 Process with pid 58999 is not found 00:10:30.013 13:15:31 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 58999 00:10:30.013 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (58999) - No such process 00:10:30.013 13:15:31 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 58999 is not found' 00:10:30.013 13:15:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:30.013 00:10:30.013 real 0m17.511s 00:10:30.013 user 0m32.279s 00:10:30.013 sys 0m4.494s 00:10:30.013 13:15:31 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.013 ************************************ 00:10:30.013 END TEST cpu_locks 00:10:30.013 ************************************ 00:10:30.013 13:15:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:30.013 ************************************ 00:10:30.013 END TEST event 00:10:30.013 ************************************ 00:10:30.013 00:10:30.013 real 0m45.345s 00:10:30.013 user 1m30.118s 00:10:30.013 sys 0m7.711s 00:10:30.013 13:15:31 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.013 13:15:31 event -- common/autotest_common.sh@10 -- # set +x 00:10:30.013 13:15:31 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:30.013 13:15:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:30.013 13:15:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.013 13:15:31 -- common/autotest_common.sh@10 -- # set +x 00:10:30.013 ************************************ 00:10:30.013 START TEST thread 00:10:30.013 ************************************ 00:10:30.013 13:15:31 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:30.271 * Looking for test storage... 
00:10:30.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:30.272 13:15:31 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:30.272 13:15:31 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:10:30.272 13:15:31 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:30.272 13:15:32 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:30.272 13:15:32 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.272 13:15:32 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.272 13:15:32 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.272 13:15:32 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.272 13:15:32 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.272 13:15:32 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.272 13:15:32 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.272 13:15:32 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.272 13:15:32 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.272 13:15:32 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.272 13:15:32 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.272 13:15:32 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:30.272 13:15:32 thread -- scripts/common.sh@345 -- # : 1 00:10:30.272 13:15:32 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.272 13:15:32 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:30.272 13:15:32 thread -- scripts/common.sh@365 -- # decimal 1 00:10:30.272 13:15:32 thread -- scripts/common.sh@353 -- # local d=1 00:10:30.272 13:15:32 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.272 13:15:32 thread -- scripts/common.sh@355 -- # echo 1 00:10:30.272 13:15:32 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.272 13:15:32 thread -- scripts/common.sh@366 -- # decimal 2 00:10:30.272 13:15:32 thread -- scripts/common.sh@353 -- # local d=2 00:10:30.272 13:15:32 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.272 13:15:32 thread -- scripts/common.sh@355 -- # echo 2 00:10:30.272 13:15:32 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.272 13:15:32 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.272 13:15:32 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.272 13:15:32 thread -- scripts/common.sh@368 -- # return 0 00:10:30.272 13:15:32 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.272 13:15:32 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:30.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.272 --rc genhtml_branch_coverage=1 00:10:30.272 --rc genhtml_function_coverage=1 00:10:30.272 --rc genhtml_legend=1 00:10:30.272 --rc geninfo_all_blocks=1 00:10:30.272 --rc geninfo_unexecuted_blocks=1 00:10:30.272 00:10:30.272 ' 00:10:30.272 13:15:32 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:30.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.272 --rc genhtml_branch_coverage=1 00:10:30.272 --rc genhtml_function_coverage=1 00:10:30.272 --rc genhtml_legend=1 00:10:30.272 --rc geninfo_all_blocks=1 00:10:30.272 --rc geninfo_unexecuted_blocks=1 00:10:30.272 00:10:30.272 ' 00:10:30.272 13:15:32 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:30.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:10:30.272 --rc genhtml_branch_coverage=1 00:10:30.272 --rc genhtml_function_coverage=1 00:10:30.272 --rc genhtml_legend=1 00:10:30.272 --rc geninfo_all_blocks=1 00:10:30.272 --rc geninfo_unexecuted_blocks=1 00:10:30.272 00:10:30.272 ' 00:10:30.272 13:15:32 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:30.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.272 --rc genhtml_branch_coverage=1 00:10:30.272 --rc genhtml_function_coverage=1 00:10:30.272 --rc genhtml_legend=1 00:10:30.272 --rc geninfo_all_blocks=1 00:10:30.272 --rc geninfo_unexecuted_blocks=1 00:10:30.272 00:10:30.272 ' 00:10:30.272 13:15:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:30.272 13:15:32 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:30.272 13:15:32 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.272 13:15:32 thread -- common/autotest_common.sh@10 -- # set +x 00:10:30.272 ************************************ 00:10:30.272 START TEST thread_poller_perf 00:10:30.272 ************************************ 00:10:30.272 13:15:32 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:30.272 [2024-09-27 13:15:32.045401] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:10:30.272 [2024-09-27 13:15:32.045477] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59135 ] 00:10:30.530 [2024-09-27 13:15:32.179511] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.530 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:10:30.530 [2024-09-27 13:15:32.251837] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.907 ====================================== 00:10:31.907 busy:2209034584 (cyc) 00:10:31.907 total_run_count: 285000 00:10:31.907 tsc_hz: 2200000000 (cyc) 00:10:31.907 ====================================== 00:10:31.907 poller_cost: 7750 (cyc), 3522 (nsec) 00:10:31.907 00:10:31.907 real 0m1.304s 00:10:31.907 user 0m1.150s 00:10:31.907 sys 0m0.048s 00:10:31.907 ************************************ 00:10:31.907 END TEST thread_poller_perf 00:10:31.907 ************************************ 00:10:31.907 13:15:33 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:31.907 13:15:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:31.907 13:15:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:31.907 13:15:33 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:31.907 13:15:33 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:31.907 13:15:33 thread -- common/autotest_common.sh@10 -- # set +x 00:10:31.907 ************************************ 00:10:31.907 START TEST thread_poller_perf 00:10:31.907 ************************************ 00:10:31.907 13:15:33 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:31.907 [2024-09-27 13:15:33.403434] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:10:31.907 [2024-09-27 13:15:33.403520] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59165 ] 00:10:31.907 [2024-09-27 13:15:33.540796] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.907 Running 1000 pollers for 1 seconds with 0 microseconds period. 
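The poller_cost reported by poller_perf is consistent with simply dividing the busy cycle count by the number of completed runs and converting to nanoseconds via tsc_hz. For the 1-microsecond-period run above:

  poller_cost (cyc)  = 2209034584 / 285000         ~= 7750 cycles per poll
  poller_cost (nsec) = 7750 / 2200000000 Hz * 1e9  ~= 3522 nsec per poll

The 0-microsecond-period run that follows reads the same way: 4166000 runs at about 528 cycles, i.e. roughly 240 nsec per poll.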
00:10:31.907 [2024-09-27 13:15:33.601011] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.843 ====================================== 00:10:32.843 busy:2201775250 (cyc) 00:10:32.843 total_run_count: 4166000 00:10:32.843 tsc_hz: 2200000000 (cyc) 00:10:32.843 ====================================== 00:10:32.843 poller_cost: 528 (cyc), 240 (nsec) 00:10:32.843 00:10:32.843 real 0m1.285s 00:10:32.843 user 0m1.134s 00:10:32.843 sys 0m0.044s 00:10:32.843 13:15:34 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:32.843 ************************************ 00:10:32.843 END TEST thread_poller_perf 00:10:32.843 ************************************ 00:10:32.843 13:15:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:33.102 13:15:34 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:33.102 00:10:33.102 real 0m2.863s 00:10:33.102 user 0m2.437s 00:10:33.102 sys 0m0.212s 00:10:33.102 13:15:34 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.102 ************************************ 00:10:33.102 END TEST thread 00:10:33.102 ************************************ 00:10:33.102 13:15:34 thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.102 13:15:34 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:33.102 13:15:34 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:33.102 13:15:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:33.102 13:15:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.102 13:15:34 -- common/autotest_common.sh@10 -- # set +x 00:10:33.102 ************************************ 00:10:33.102 START TEST app_cmdline 00:10:33.102 ************************************ 00:10:33.102 13:15:34 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:33.102 * Looking for test storage... 00:10:33.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:33.102 13:15:34 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:33.102 13:15:34 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:10:33.102 13:15:34 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:33.102 13:15:34 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.102 13:15:34 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:33.102 13:15:34 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.102 13:15:34 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:33.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.102 --rc genhtml_branch_coverage=1 00:10:33.102 --rc genhtml_function_coverage=1 00:10:33.102 --rc genhtml_legend=1 00:10:33.102 --rc geninfo_all_blocks=1 00:10:33.102 --rc geninfo_unexecuted_blocks=1 00:10:33.103 00:10:33.103 ' 00:10:33.103 13:15:34 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:33.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.103 --rc genhtml_branch_coverage=1 00:10:33.103 --rc genhtml_function_coverage=1 00:10:33.103 --rc genhtml_legend=1 00:10:33.103 --rc geninfo_all_blocks=1 00:10:33.103 --rc geninfo_unexecuted_blocks=1 00:10:33.103 00:10:33.103 ' 00:10:33.103 13:15:34 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:33.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.103 --rc genhtml_branch_coverage=1 00:10:33.103 --rc genhtml_function_coverage=1 00:10:33.103 --rc genhtml_legend=1 00:10:33.103 --rc geninfo_all_blocks=1 00:10:33.103 --rc geninfo_unexecuted_blocks=1 00:10:33.103 00:10:33.103 ' 00:10:33.103 13:15:34 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:33.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.103 --rc genhtml_branch_coverage=1 00:10:33.103 --rc genhtml_function_coverage=1 00:10:33.103 --rc genhtml_legend=1 00:10:33.103 --rc geninfo_all_blocks=1 00:10:33.103 --rc geninfo_unexecuted_blocks=1 00:10:33.103 00:10:33.103 ' 00:10:33.103 13:15:34 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:33.103 13:15:34 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59247 00:10:33.103 13:15:34 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59247 00:10:33.103 13:15:34 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59247 ']' 00:10:33.103 13:15:34 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:33.103 13:15:34 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.103 13:15:34 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:33.103 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:10:33.103 13:15:34 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.103 13:15:34 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:33.103 13:15:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:33.361 [2024-09-27 13:15:35.003105] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:10:33.361 [2024-09-27 13:15:35.003234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59247 ] 00:10:33.361 [2024-09-27 13:15:35.138070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.361 [2024-09-27 13:15:35.197841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.620 [2024-09-27 13:15:35.238762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:33.620 13:15:35 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:33.620 13:15:35 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:10:33.620 13:15:35 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:33.879 { 00:10:33.879 "version": "SPDK v25.01-pre git sha1 71dc0c1e9", 00:10:33.879 "fields": { 00:10:33.879 "major": 25, 00:10:33.879 "minor": 1, 00:10:33.879 "patch": 0, 00:10:33.879 "suffix": "-pre", 00:10:33.879 "commit": "71dc0c1e9" 00:10:33.879 } 00:10:33.879 } 00:10:33.879 13:15:35 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:33.879 13:15:35 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:33.879 13:15:35 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:33.879 13:15:35 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:33.879 13:15:35 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:33.879 13:15:35 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.879 13:15:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:33.879 13:15:35 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:33.879 13:15:35 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:33.879 13:15:35 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.879 13:15:35 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:33.879 13:15:35 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:33.879 13:15:35 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:33.879 13:15:35 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:10:33.879 13:15:35 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:33.879 13:15:35 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:33.879 13:15:35 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:33.879 13:15:35 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:33.879 13:15:35 app_cmdline -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:33.879 13:15:35 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:33.879 13:15:35 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:33.879 13:15:35 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:33.879 13:15:35 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:33.879 13:15:35 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:34.139 request: 00:10:34.139 { 00:10:34.139 "method": "env_dpdk_get_mem_stats", 00:10:34.139 "req_id": 1 00:10:34.139 } 00:10:34.139 Got JSON-RPC error response 00:10:34.139 response: 00:10:34.139 { 00:10:34.139 "code": -32601, 00:10:34.139 "message": "Method not found" 00:10:34.139 } 00:10:34.139 13:15:35 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:10:34.139 13:15:35 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:34.139 13:15:35 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:34.139 13:15:35 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:34.139 13:15:35 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59247 00:10:34.139 13:15:35 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59247 ']' 00:10:34.139 13:15:35 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59247 00:10:34.139 13:15:35 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:10:34.139 13:15:35 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.139 13:15:35 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59247 00:10:34.139 13:15:35 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:34.139 13:15:35 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:34.139 killing process with pid 59247 00:10:34.139 13:15:35 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59247' 00:10:34.139 13:15:35 app_cmdline -- common/autotest_common.sh@969 -- # kill 59247 00:10:34.139 13:15:35 app_cmdline -- common/autotest_common.sh@974 -- # wait 59247 00:10:34.399 00:10:34.399 real 0m1.472s 00:10:34.399 user 0m1.864s 00:10:34.399 sys 0m0.376s 00:10:34.399 13:15:36 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.399 ************************************ 00:10:34.399 13:15:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:34.399 END TEST app_cmdline 00:10:34.399 ************************************ 00:10:34.658 13:15:36 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:34.658 13:15:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:34.658 13:15:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.658 13:15:36 -- common/autotest_common.sh@10 -- # set +x 00:10:34.658 ************************************ 00:10:34.658 START TEST version 00:10:34.658 ************************************ 00:10:34.658 13:15:36 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:34.658 * Looking for test storage... 
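The app_cmdline failure above is the expected behaviour of the RPC allowlist: the target (pid 59247) was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served and env_dpdk_get_mem_stats is rejected with -32601 even though the target would normally expose it. In plain rpc.py terms, the three calls the test makes look roughly like:

  scripts/rpc.py spdk_get_version           # allowed: returns the version object shown above
  scripts/rpc.py rpc_get_methods            # allowed: returns exactly the two whitelisted methods
  scripts/rpc.py env_dpdk_get_mem_stats     # rejected: JSON-RPC error -32601, 'Method not found'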
00:10:34.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:34.658 13:15:36 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:34.658 13:15:36 version -- common/autotest_common.sh@1681 -- # lcov --version 00:10:34.658 13:15:36 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:34.658 13:15:36 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:34.658 13:15:36 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.658 13:15:36 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.658 13:15:36 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.658 13:15:36 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.658 13:15:36 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.658 13:15:36 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.658 13:15:36 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.658 13:15:36 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.658 13:15:36 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.658 13:15:36 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.658 13:15:36 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.658 13:15:36 version -- scripts/common.sh@344 -- # case "$op" in 00:10:34.658 13:15:36 version -- scripts/common.sh@345 -- # : 1 00:10:34.658 13:15:36 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.658 13:15:36 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:34.658 13:15:36 version -- scripts/common.sh@365 -- # decimal 1 00:10:34.658 13:15:36 version -- scripts/common.sh@353 -- # local d=1 00:10:34.658 13:15:36 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.658 13:15:36 version -- scripts/common.sh@355 -- # echo 1 00:10:34.658 13:15:36 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.658 13:15:36 version -- scripts/common.sh@366 -- # decimal 2 00:10:34.658 13:15:36 version -- scripts/common.sh@353 -- # local d=2 00:10:34.658 13:15:36 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.658 13:15:36 version -- scripts/common.sh@355 -- # echo 2 00:10:34.658 13:15:36 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.658 13:15:36 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.658 13:15:36 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.658 13:15:36 version -- scripts/common.sh@368 -- # return 0 00:10:34.658 13:15:36 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.658 13:15:36 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:34.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.658 --rc genhtml_branch_coverage=1 00:10:34.658 --rc genhtml_function_coverage=1 00:10:34.658 --rc genhtml_legend=1 00:10:34.658 --rc geninfo_all_blocks=1 00:10:34.658 --rc geninfo_unexecuted_blocks=1 00:10:34.658 00:10:34.658 ' 00:10:34.658 13:15:36 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:34.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.658 --rc genhtml_branch_coverage=1 00:10:34.658 --rc genhtml_function_coverage=1 00:10:34.658 --rc genhtml_legend=1 00:10:34.658 --rc geninfo_all_blocks=1 00:10:34.658 --rc geninfo_unexecuted_blocks=1 00:10:34.658 00:10:34.658 ' 00:10:34.658 13:15:36 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:34.658 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:34.658 --rc genhtml_branch_coverage=1 00:10:34.658 --rc genhtml_function_coverage=1 00:10:34.658 --rc genhtml_legend=1 00:10:34.658 --rc geninfo_all_blocks=1 00:10:34.658 --rc geninfo_unexecuted_blocks=1 00:10:34.658 00:10:34.658 ' 00:10:34.658 13:15:36 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:34.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.658 --rc genhtml_branch_coverage=1 00:10:34.658 --rc genhtml_function_coverage=1 00:10:34.658 --rc genhtml_legend=1 00:10:34.658 --rc geninfo_all_blocks=1 00:10:34.658 --rc geninfo_unexecuted_blocks=1 00:10:34.658 00:10:34.658 ' 00:10:34.658 13:15:36 version -- app/version.sh@17 -- # get_header_version major 00:10:34.658 13:15:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:34.658 13:15:36 version -- app/version.sh@14 -- # cut -f2 00:10:34.658 13:15:36 version -- app/version.sh@14 -- # tr -d '"' 00:10:34.658 13:15:36 version -- app/version.sh@17 -- # major=25 00:10:34.658 13:15:36 version -- app/version.sh@18 -- # get_header_version minor 00:10:34.658 13:15:36 version -- app/version.sh@14 -- # cut -f2 00:10:34.658 13:15:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:34.658 13:15:36 version -- app/version.sh@14 -- # tr -d '"' 00:10:34.658 13:15:36 version -- app/version.sh@18 -- # minor=1 00:10:34.658 13:15:36 version -- app/version.sh@19 -- # get_header_version patch 00:10:34.658 13:15:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:34.658 13:15:36 version -- app/version.sh@14 -- # cut -f2 00:10:34.658 13:15:36 version -- app/version.sh@14 -- # tr -d '"' 00:10:34.658 13:15:36 version -- app/version.sh@19 -- # patch=0 00:10:34.658 13:15:36 version -- app/version.sh@20 -- # get_header_version suffix 00:10:34.658 13:15:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:34.658 13:15:36 version -- app/version.sh@14 -- # cut -f2 00:10:34.658 13:15:36 version -- app/version.sh@14 -- # tr -d '"' 00:10:34.658 13:15:36 version -- app/version.sh@20 -- # suffix=-pre 00:10:34.658 13:15:36 version -- app/version.sh@22 -- # version=25.1 00:10:34.658 13:15:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:34.658 13:15:36 version -- app/version.sh@28 -- # version=25.1rc0 00:10:34.658 13:15:36 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:34.658 13:15:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:34.917 13:15:36 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:34.917 13:15:36 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:34.917 00:10:34.917 real 0m0.252s 00:10:34.917 user 0m0.157s 00:10:34.917 sys 0m0.130s 00:10:34.917 13:15:36 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.917 13:15:36 version -- common/autotest_common.sh@10 -- # set +x 00:10:34.917 ************************************ 00:10:34.917 END TEST version 00:10:34.917 ************************************ 00:10:34.917 13:15:36 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:34.917 13:15:36 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:34.917 13:15:36 -- spdk/autotest.sh@194 -- # uname -s 00:10:34.917 13:15:36 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:34.917 13:15:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:34.917 13:15:36 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:10:34.917 13:15:36 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:10:34.917 13:15:36 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:10:34.917 13:15:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:34.917 13:15:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.917 13:15:36 -- common/autotest_common.sh@10 -- # set +x 00:10:34.917 ************************************ 00:10:34.917 START TEST spdk_dd 00:10:34.917 ************************************ 00:10:34.917 13:15:36 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:10:34.917 * Looking for test storage... 00:10:34.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:34.917 13:15:36 spdk_dd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:34.917 13:15:36 spdk_dd -- common/autotest_common.sh@1681 -- # lcov --version 00:10:34.917 13:15:36 spdk_dd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:34.917 13:15:36 spdk_dd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:34.917 13:15:36 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.917 13:15:36 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.917 13:15:36 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.917 13:15:36 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.917 13:15:36 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.917 13:15:36 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.917 13:15:36 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.917 13:15:36 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.917 13:15:36 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.917 13:15:36 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.917 13:15:36 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.917 13:15:36 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:10:34.917 13:15:36 spdk_dd -- scripts/common.sh@345 -- # : 1 00:10:34.917 13:15:36 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.917 13:15:36 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.917 13:15:36 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:10:35.177 13:15:36 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:10:35.177 13:15:36 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.177 13:15:36 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:10:35.177 13:15:36 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.177 13:15:36 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:10:35.177 13:15:36 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:10:35.177 13:15:36 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.177 13:15:36 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:10:35.177 13:15:36 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.177 13:15:36 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.177 13:15:36 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.177 13:15:36 spdk_dd -- scripts/common.sh@368 -- # return 0 00:10:35.177 13:15:36 spdk_dd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.177 13:15:36 spdk_dd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:35.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.177 --rc genhtml_branch_coverage=1 00:10:35.177 --rc genhtml_function_coverage=1 00:10:35.177 --rc genhtml_legend=1 00:10:35.177 --rc geninfo_all_blocks=1 00:10:35.177 --rc geninfo_unexecuted_blocks=1 00:10:35.177 00:10:35.177 ' 00:10:35.177 13:15:36 spdk_dd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:35.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.177 --rc genhtml_branch_coverage=1 00:10:35.177 --rc genhtml_function_coverage=1 00:10:35.177 --rc genhtml_legend=1 00:10:35.177 --rc geninfo_all_blocks=1 00:10:35.177 --rc geninfo_unexecuted_blocks=1 00:10:35.177 00:10:35.177 ' 00:10:35.177 13:15:36 spdk_dd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:35.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.177 --rc genhtml_branch_coverage=1 00:10:35.177 --rc genhtml_function_coverage=1 00:10:35.177 --rc genhtml_legend=1 00:10:35.177 --rc geninfo_all_blocks=1 00:10:35.177 --rc geninfo_unexecuted_blocks=1 00:10:35.177 00:10:35.177 ' 00:10:35.177 13:15:36 spdk_dd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:35.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.177 --rc genhtml_branch_coverage=1 00:10:35.177 --rc genhtml_function_coverage=1 00:10:35.177 --rc genhtml_legend=1 00:10:35.177 --rc geninfo_all_blocks=1 00:10:35.177 --rc geninfo_unexecuted_blocks=1 00:10:35.177 00:10:35.177 ' 00:10:35.177 13:15:36 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:35.177 13:15:36 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.177 13:15:36 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.177 13:15:36 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.177 13:15:36 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.177 13:15:36 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.177 13:15:36 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.177 13:15:36 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.177 13:15:36 spdk_dd -- paths/export.sh@5 -- # export PATH 00:10:35.177 13:15:36 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.178 13:15:36 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:35.437 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:35.437 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:35.437 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:35.437 13:15:37 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:10:35.437 13:15:37 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:10:35.437 13:15:37 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:10:35.437 13:15:37 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:10:35.437 13:15:37 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@233 -- # local class 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@235 -- # local progif 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@236 -- # class=01 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:10:35.438 13:15:37 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@18 -- # local i 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@27 -- # return 0 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@18 -- # local i 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@27 -- # return 0 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:10:35.438 13:15:37 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:10:35.438 13:15:37 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@139 -- # local lib 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 
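For reference, the liburing probe traced here (check_liburing in dd/common.sh) reduces to scanning the DT_NEEDED entries that objdump -p reports for the spdk_dd binary and flagging any entry that matches liburing.so.*. A minimal stand-alone sketch of that check; the wrapper function name and its argument handling are illustrative, not lifted from the SPDK scripts:

  # Print 1 if the given ELF binary is dynamically linked to liburing, else 0.
  probe_liburing() {
      local bin=$1 lib in_use=0
      # objdump -p emits one "NEEDED <soname>" line per DT_NEEDED entry.
      while read -r _ lib _; do
          [[ $lib == liburing.so.* ]] && in_use=1
      done < <(objdump -p "$bin" | grep NEEDED)
      echo "$in_use"
  }
  # e.g. probe_liburing build/bin/spdk_dd   -> 1 when the build was configured with liburing

The trace below continues walking the remaining NEEDED entries until it reaches liburing.so.2 and reports that spdk_dd is linked to liburing.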
00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.14.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.1.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:10:35.438 * spdk_dd linked to liburing 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:10:35.438 13:15:37 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_PGO_USE=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:35.438 13:15:37 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=y 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@75 -- # CONFIG_FC=n 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:35.439 13:15:37 
spdk_dd -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:35.439 13:15:37 spdk_dd -- common/build_config.sh@89 -- # CONFIG_URING=y 00:10:35.439 13:15:37 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:10:35.439 13:15:37 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:10:35.439 13:15:37 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:10:35.439 13:15:37 spdk_dd -- dd/common.sh@153 -- # return 0 00:10:35.439 13:15:37 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:10:35.439 13:15:37 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:10:35.439 13:15:37 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:35.439 13:15:37 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.439 13:15:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:35.439 ************************************ 00:10:35.439 START TEST spdk_dd_basic_rw 00:10:35.439 ************************************ 00:10:35.439 13:15:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:10:35.698 * Looking for test storage... 00:10:35.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lcov --version 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:35.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.698 --rc genhtml_branch_coverage=1 00:10:35.698 --rc genhtml_function_coverage=1 00:10:35.698 --rc genhtml_legend=1 00:10:35.698 --rc geninfo_all_blocks=1 00:10:35.698 --rc geninfo_unexecuted_blocks=1 00:10:35.698 00:10:35.698 ' 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:35.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.698 --rc genhtml_branch_coverage=1 00:10:35.698 --rc genhtml_function_coverage=1 00:10:35.698 --rc genhtml_legend=1 00:10:35.698 --rc geninfo_all_blocks=1 00:10:35.698 --rc geninfo_unexecuted_blocks=1 00:10:35.698 00:10:35.698 ' 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:35.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.698 --rc genhtml_branch_coverage=1 00:10:35.698 --rc genhtml_function_coverage=1 00:10:35.698 --rc genhtml_legend=1 00:10:35.698 --rc geninfo_all_blocks=1 00:10:35.698 --rc geninfo_unexecuted_blocks=1 00:10:35.698 00:10:35.698 ' 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:35.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.698 --rc genhtml_branch_coverage=1 00:10:35.698 --rc genhtml_function_coverage=1 00:10:35.698 --rc genhtml_legend=1 00:10:35.698 --rc geninfo_all_blocks=1 00:10:35.698 --rc geninfo_unexecuted_blocks=1 00:10:35.698 00:10:35.698 ' 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:35.698 13:15:37 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:10:35.698 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:10:35.959 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:10:35.959 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:35.960 ************************************ 00:10:35.960 START TEST dd_bs_lt_native_bs 00:10:35.960 ************************************ 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:35.960 13:15:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:35.960 { 00:10:35.960 "subsystems": [ 00:10:35.960 { 00:10:35.960 "subsystem": "bdev", 00:10:35.960 "config": [ 00:10:35.960 { 00:10:35.960 "params": { 00:10:35.960 "trtype": "pcie", 00:10:35.960 "traddr": "0000:00:10.0", 00:10:35.960 "name": "Nvme0" 00:10:35.960 }, 00:10:35.960 "method": "bdev_nvme_attach_controller" 00:10:35.960 }, 00:10:35.960 { 00:10:35.960 "method": "bdev_wait_for_examine" 00:10:35.960 } 00:10:35.960 ] 00:10:35.960 } 00:10:35.960 ] 00:10:35.960 } 00:10:35.960 [2024-09-27 13:15:37.689222] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:10:35.960 [2024-09-27 13:15:37.689325] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59586 ] 00:10:36.219 [2024-09-27 13:15:37.823760] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.219 [2024-09-27 13:15:37.882756] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.219 [2024-09-27 13:15:37.913748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:36.219 [2024-09-27 13:15:38.004450] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:10:36.219 [2024-09-27 13:15:38.004530] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:36.479 [2024-09-27 13:15:38.075097] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:36.479 00:10:36.479 real 0m0.528s 00:10:36.479 user 0m0.362s 00:10:36.479 sys 0m0.119s 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.479 13:15:38 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:10:36.479 ************************************ 00:10:36.479 END TEST dd_bs_lt_native_bs 00:10:36.479 ************************************ 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:36.479 ************************************ 00:10:36.479 START TEST dd_rw 00:10:36.479 ************************************ 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:36.479 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:37.046 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:10:37.046 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:37.046 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:37.046 13:15:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:37.305 [2024-09-27 13:15:38.905052] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
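The block above is where dd/common.sh works out the device's native block size: the identify dump is matched against the current LBA format entry, the captured data size (4096 here) becomes native_bs, and dd_bs_lt_native_bs then deliberately undercuts it with --bs=2048 so that spdk_dd must fail with the "--bs value cannot be less than ... native block size" error. A minimal sketch of that extraction, using a sample identify snippet taken from the dump above rather than the variables the helper really reads:

#!/usr/bin/env bash
# Sketch: recover the native block size from an identify dump.
identify_output='Current LBA Format: LBA Format #04
LBA Format #04: Data Size: 4096 Metadata Size: 0'
current_lbaf="LBA Format #04"

regex="$current_lbaf: Data Size: *([0-9]+)"
if [[ $identify_output =~ $regex ]]; then
    native_bs="${BASH_REMATCH[1]}"      # 4096 on this controller
fi
echo "native block size: ${native_bs:-unknown}"

# dd_bs_lt_native_bs then runs spdk_dd with --bs=2048 (< native_bs) and treats
# the resulting error plus a non-zero exit code as the expected outcome.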
00:10:37.305 [2024-09-27 13:15:38.905167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59617 ] 00:10:37.305 { 00:10:37.305 "subsystems": [ 00:10:37.305 { 00:10:37.305 "subsystem": "bdev", 00:10:37.305 "config": [ 00:10:37.305 { 00:10:37.305 "params": { 00:10:37.305 "trtype": "pcie", 00:10:37.305 "traddr": "0000:00:10.0", 00:10:37.305 "name": "Nvme0" 00:10:37.305 }, 00:10:37.305 "method": "bdev_nvme_attach_controller" 00:10:37.305 }, 00:10:37.305 { 00:10:37.305 "method": "bdev_wait_for_examine" 00:10:37.305 } 00:10:37.305 ] 00:10:37.305 } 00:10:37.305 ] 00:10:37.305 } 00:10:37.305 [2024-09-27 13:15:39.041665] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.305 [2024-09-27 13:15:39.099776] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.305 [2024-09-27 13:15:39.129095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:37.563  Copying: 60/60 [kB] (average 29 MBps) 00:10:37.563 00:10:37.563 13:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:10:37.563 13:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:37.563 13:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:37.563 13:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:37.823 [2024-09-27 13:15:39.419525] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
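The es= bookkeeping at the end of dd_bs_lt_native_bs is the NOT wrapper from autotest_common.sh concluding that a failing spdk_dd is exactly what the test wanted: the raw exit status 234 is above 128, gets folded down, and finally lands on es=1, which (( !es == 0 )) turns into overall success. A stripped-down wrapper with the same observable behaviour, not the real helper (which also resolves the executable and manages xtrace):

#!/usr/bin/env bash
# Simplified NOT-style wrapper: succeed only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es - 128 ))   # 234 -> 106, as in the trace above
    (( es != 0 ))                          # success iff the command failed
}

NOT false && echo "failure detected, as expected"
NOT true  || echo "wrapped command unexpectedly succeeded"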
00:10:37.823 [2024-09-27 13:15:39.419621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59636 ] 00:10:37.823 { 00:10:37.823 "subsystems": [ 00:10:37.823 { 00:10:37.823 "subsystem": "bdev", 00:10:37.823 "config": [ 00:10:37.823 { 00:10:37.823 "params": { 00:10:37.823 "trtype": "pcie", 00:10:37.823 "traddr": "0000:00:10.0", 00:10:37.823 "name": "Nvme0" 00:10:37.823 }, 00:10:37.823 "method": "bdev_nvme_attach_controller" 00:10:37.823 }, 00:10:37.823 { 00:10:37.823 "method": "bdev_wait_for_examine" 00:10:37.823 } 00:10:37.823 ] 00:10:37.823 } 00:10:37.823 ] 00:10:37.823 } 00:10:37.823 [2024-09-27 13:15:39.555674] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.823 [2024-09-27 13:15:39.608785] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.823 [2024-09-27 13:15:39.642034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:38.081  Copying: 60/60 [kB] (average 14 MBps) 00:10:38.081 00:10:38.081 13:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:38.081 13:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:10:38.081 13:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:38.081 13:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:38.081 13:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:10:38.082 13:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:38.082 13:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:38.082 13:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:38.082 13:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:38.082 13:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:38.082 13:15:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:38.340 { 00:10:38.340 "subsystems": [ 00:10:38.340 { 00:10:38.340 "subsystem": "bdev", 00:10:38.340 "config": [ 00:10:38.340 { 00:10:38.340 "params": { 00:10:38.340 "trtype": "pcie", 00:10:38.340 "traddr": "0000:00:10.0", 00:10:38.340 "name": "Nvme0" 00:10:38.340 }, 00:10:38.340 "method": "bdev_nvme_attach_controller" 00:10:38.340 }, 00:10:38.340 { 00:10:38.340 "method": "bdev_wait_for_examine" 00:10:38.340 } 00:10:38.340 ] 00:10:38.340 } 00:10:38.340 ] 00:10:38.340 } 00:10:38.340 [2024-09-27 13:15:39.948739] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:38.340 [2024-09-27 13:15:39.948840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59646 ] 00:10:38.340 [2024-09-27 13:15:40.082378] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.340 [2024-09-27 13:15:40.166352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.656 [2024-09-27 13:15:40.203303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:38.656  Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:38.656 00:10:38.656 13:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:38.656 13:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:10:38.656 13:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:10:38.656 13:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:10:38.656 13:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:10:38.656 13:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:38.656 13:15:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:39.242 13:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:10:39.242 13:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:39.242 13:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:39.242 13:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:39.500 { 00:10:39.500 "subsystems": [ 00:10:39.500 { 00:10:39.500 "subsystem": "bdev", 00:10:39.500 "config": [ 00:10:39.500 { 00:10:39.500 "params": { 00:10:39.500 "trtype": "pcie", 00:10:39.500 "traddr": "0000:00:10.0", 00:10:39.500 "name": "Nvme0" 00:10:39.500 }, 00:10:39.500 "method": "bdev_nvme_attach_controller" 00:10:39.500 }, 00:10:39.500 { 00:10:39.500 "method": "bdev_wait_for_examine" 00:10:39.500 } 00:10:39.500 ] 00:10:39.500 } 00:10:39.500 ] 00:10:39.500 } 00:10:39.500 [2024-09-27 13:15:41.141279] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
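Each basic_rw iteration above follows the same shape: put count*bs bytes into dd.dump0, write them to the Nvme0n1 bdev at the chosen --bs/--qd, read the same number of blocks back into dd.dump1, diff the two dumps, and zero the bdev before the next run. A condensed sketch of one iteration; SPDK_DD is the binary path from the log, conf.json stands in for the generated bdev config, and /dev/urandom replaces the gen_bytes helper:

#!/usr/bin/env bash
set -e
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf=conf.json                 # bdev_nvme_attach_controller config as printed above
bs=4096 qd=1 count=15
size=$((bs * count))           # 61440 bytes for the first iteration

head -c "$size" /dev/urandom > dd.dump0     # stand-in for gen_bytes 61440

"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$conf"
"$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json "$conf"

diff -q dd.dump0 dd.dump1                   # data must round-trip unchanged
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$conf"   # clear_nvme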
00:10:39.500 [2024-09-27 13:15:41.141387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59665 ] 00:10:39.500 [2024-09-27 13:15:41.283712] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.500 [2024-09-27 13:15:41.344574] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.759 [2024-09-27 13:15:41.377293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:40.019  Copying: 60/60 [kB] (average 58 MBps) 00:10:40.019 00:10:40.019 13:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:40.019 13:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:10:40.019 13:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:40.019 13:15:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:40.019 { 00:10:40.019 "subsystems": [ 00:10:40.019 { 00:10:40.019 "subsystem": "bdev", 00:10:40.019 "config": [ 00:10:40.019 { 00:10:40.019 "params": { 00:10:40.019 "trtype": "pcie", 00:10:40.019 "traddr": "0000:00:10.0", 00:10:40.019 "name": "Nvme0" 00:10:40.019 }, 00:10:40.019 "method": "bdev_nvme_attach_controller" 00:10:40.019 }, 00:10:40.019 { 00:10:40.019 "method": "bdev_wait_for_examine" 00:10:40.019 } 00:10:40.019 ] 00:10:40.019 } 00:10:40.019 ] 00:10:40.019 } 00:10:40.019 [2024-09-27 13:15:41.684891] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:40.019 [2024-09-27 13:15:41.685019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59684 ] 00:10:40.019 [2024-09-27 13:15:41.822262] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.279 [2024-09-27 13:15:41.882593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.279 [2024-09-27 13:15:41.914530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:40.539  Copying: 60/60 [kB] (average 58 MBps) 00:10:40.539 00:10:40.539 13:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:40.539 13:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:10:40.539 13:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:40.539 13:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:40.539 13:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:10:40.539 13:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:40.539 13:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:40.539 13:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:40.539 13:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:40.539 13:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:40.539 13:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:40.539 [2024-09-27 13:15:42.212595] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
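The JSON fragment echoed before every run is the throwaway bdev configuration that gen_conf hands to spdk_dd on /dev/fd/62: attach the PCIe controller at 0000:00:10.0 as Nvme0, then wait for bdev examination before any I/O starts. Reassembled as one document and fed through process substitution, the equivalent call looks roughly like this (a sketch, not the gen_conf implementation in dd/common.sh):

#!/usr/bin/env bash
# Emit the same bdev config the log shows, then hand it to spdk_dd on an fd.
gen_conf() {
    cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)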
00:10:40.539 [2024-09-27 13:15:42.212711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59699 ] 00:10:40.539 { 00:10:40.539 "subsystems": [ 00:10:40.539 { 00:10:40.539 "subsystem": "bdev", 00:10:40.539 "config": [ 00:10:40.539 { 00:10:40.539 "params": { 00:10:40.539 "trtype": "pcie", 00:10:40.539 "traddr": "0000:00:10.0", 00:10:40.539 "name": "Nvme0" 00:10:40.539 }, 00:10:40.539 "method": "bdev_nvme_attach_controller" 00:10:40.539 }, 00:10:40.539 { 00:10:40.539 "method": "bdev_wait_for_examine" 00:10:40.539 } 00:10:40.539 ] 00:10:40.539 } 00:10:40.539 ] 00:10:40.539 } 00:10:40.539 [2024-09-27 13:15:42.344049] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.798 [2024-09-27 13:15:42.399581] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.798 [2024-09-27 13:15:42.430103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:41.062  Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:41.062 00:10:41.062 13:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:10:41.062 13:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:41.062 13:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:10:41.062 13:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:10:41.062 13:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:10:41.062 13:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:10:41.062 13:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:41.062 13:15:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:41.629 13:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:10:41.629 13:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:41.629 13:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:41.629 13:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:41.629 { 00:10:41.629 "subsystems": [ 00:10:41.629 { 00:10:41.629 "subsystem": "bdev", 00:10:41.629 "config": [ 00:10:41.629 { 00:10:41.629 "params": { 00:10:41.629 "trtype": "pcie", 00:10:41.629 "traddr": "0000:00:10.0", 00:10:41.629 "name": "Nvme0" 00:10:41.629 }, 00:10:41.629 "method": "bdev_nvme_attach_controller" 00:10:41.629 }, 00:10:41.629 { 00:10:41.629 "method": "bdev_wait_for_examine" 00:10:41.629 } 00:10:41.629 ] 00:10:41.629 } 00:10:41.629 ] 00:10:41.629 } 00:10:41.629 [2024-09-27 13:15:43.393996] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:41.629 [2024-09-27 13:15:43.394120] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59724 ] 00:10:41.888 [2024-09-27 13:15:43.534040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.888 [2024-09-27 13:15:43.601781] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.888 [2024-09-27 13:15:43.634047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:42.147  Copying: 56/56 [kB] (average 54 MBps) 00:10:42.147 00:10:42.147 13:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:10:42.147 13:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:42.147 13:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:42.147 13:15:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:42.147 { 00:10:42.147 "subsystems": [ 00:10:42.147 { 00:10:42.147 "subsystem": "bdev", 00:10:42.147 "config": [ 00:10:42.147 { 00:10:42.147 "params": { 00:10:42.147 "trtype": "pcie", 00:10:42.147 "traddr": "0000:00:10.0", 00:10:42.147 "name": "Nvme0" 00:10:42.147 }, 00:10:42.147 "method": "bdev_nvme_attach_controller" 00:10:42.147 }, 00:10:42.147 { 00:10:42.147 "method": "bdev_wait_for_examine" 00:10:42.147 } 00:10:42.147 ] 00:10:42.147 } 00:10:42.147 ] 00:10:42.147 } 00:10:42.147 [2024-09-27 13:15:43.960620] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
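The Copying: lines record how much data each spdk_dd run moved and its average throughput, which is the quickest way to compare the qd=1 and qd=64 passes when scanning this log. If the console text is saved to a file, the figures can be pulled out with a one-liner like the following (build.log is a placeholder name; the pattern matches the [kB] progress lines only):

#!/usr/bin/env bash
# List "<copied> <MBps>" pairs from a saved copy of this console log.
grep -o 'Copying: [0-9/]* \[kB\] (average [0-9]* MBps)' build.log |
    awk '{ print $2, $5 }'        # e.g. "60/60 29", "60/60 14", "60/60 58", ...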
00:10:42.147 [2024-09-27 13:15:43.960772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59732 ] 00:10:42.406 [2024-09-27 13:15:44.100625] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.406 [2024-09-27 13:15:44.164501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.406 [2024-09-27 13:15:44.196517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:42.665  Copying: 56/56 [kB] (average 27 MBps) 00:10:42.665 00:10:42.665 13:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:42.665 13:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:10:42.665 13:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:42.665 13:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:42.665 13:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:10:42.665 13:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:42.665 13:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:42.665 13:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:42.665 13:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:42.665 13:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:42.665 13:15:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:42.924 [2024-09-27 13:15:44.511396] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:42.924 [2024-09-27 13:15:44.511477] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59753 ] 00:10:42.924 { 00:10:42.924 "subsystems": [ 00:10:42.924 { 00:10:42.924 "subsystem": "bdev", 00:10:42.924 "config": [ 00:10:42.924 { 00:10:42.924 "params": { 00:10:42.924 "trtype": "pcie", 00:10:42.924 "traddr": "0000:00:10.0", 00:10:42.924 "name": "Nvme0" 00:10:42.924 }, 00:10:42.924 "method": "bdev_nvme_attach_controller" 00:10:42.924 }, 00:10:42.924 { 00:10:42.924 "method": "bdev_wait_for_examine" 00:10:42.924 } 00:10:42.924 ] 00:10:42.924 } 00:10:42.924 ] 00:10:42.924 } 00:10:42.924 [2024-09-27 13:15:44.644425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.924 [2024-09-27 13:15:44.705597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.924 [2024-09-27 13:15:44.742047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:43.183  Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:43.183 00:10:43.183 13:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:43.183 13:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:10:43.183 13:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:10:43.183 13:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:10:43.183 13:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:10:43.183 13:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:43.183 13:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:44.120 13:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:10:44.120 13:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:44.120 13:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:44.120 13:15:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:44.120 [2024-09-27 13:15:45.672492] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:44.120 [2024-09-27 13:15:45.672606] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59772 ] 00:10:44.120 { 00:10:44.120 "subsystems": [ 00:10:44.120 { 00:10:44.120 "subsystem": "bdev", 00:10:44.120 "config": [ 00:10:44.120 { 00:10:44.120 "params": { 00:10:44.120 "trtype": "pcie", 00:10:44.120 "traddr": "0000:00:10.0", 00:10:44.120 "name": "Nvme0" 00:10:44.120 }, 00:10:44.120 "method": "bdev_nvme_attach_controller" 00:10:44.120 }, 00:10:44.120 { 00:10:44.120 "method": "bdev_wait_for_examine" 00:10:44.120 } 00:10:44.120 ] 00:10:44.120 } 00:10:44.120 ] 00:10:44.120 } 00:10:44.120 [2024-09-27 13:15:45.811250] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.120 [2024-09-27 13:15:45.881044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.120 [2024-09-27 13:15:45.914007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:44.406  Copying: 56/56 [kB] (average 54 MBps) 00:10:44.406 00:10:44.406 13:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:10:44.406 13:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:44.406 13:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:44.406 13:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:44.665 { 00:10:44.665 "subsystems": [ 00:10:44.665 { 00:10:44.665 "subsystem": "bdev", 00:10:44.665 "config": [ 00:10:44.665 { 00:10:44.665 "params": { 00:10:44.665 "trtype": "pcie", 00:10:44.665 "traddr": "0000:00:10.0", 00:10:44.665 "name": "Nvme0" 00:10:44.665 }, 00:10:44.665 "method": "bdev_nvme_attach_controller" 00:10:44.665 }, 00:10:44.665 { 00:10:44.665 "method": "bdev_wait_for_examine" 00:10:44.665 } 00:10:44.665 ] 00:10:44.665 } 00:10:44.665 ] 00:10:44.665 } 00:10:44.665 [2024-09-27 13:15:46.252845] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:44.665 [2024-09-27 13:15:46.252941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59780 ] 00:10:44.665 [2024-09-27 13:15:46.394186] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.665 [2024-09-27 13:15:46.466866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.665 [2024-09-27 13:15:46.501189] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:44.924  Copying: 56/56 [kB] (average 54 MBps) 00:10:44.924 00:10:45.183 13:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:45.183 13:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:10:45.183 13:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:45.183 13:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:45.183 13:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:10:45.183 13:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:45.183 13:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:45.183 13:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:45.183 13:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:45.183 13:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:45.183 13:15:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:45.183 [2024-09-27 13:15:46.827742] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:45.183 [2024-09-27 13:15:46.827851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59801 ] 00:10:45.183 { 00:10:45.183 "subsystems": [ 00:10:45.183 { 00:10:45.183 "subsystem": "bdev", 00:10:45.183 "config": [ 00:10:45.183 { 00:10:45.183 "params": { 00:10:45.183 "trtype": "pcie", 00:10:45.183 "traddr": "0000:00:10.0", 00:10:45.183 "name": "Nvme0" 00:10:45.183 }, 00:10:45.183 "method": "bdev_nvme_attach_controller" 00:10:45.183 }, 00:10:45.183 { 00:10:45.183 "method": "bdev_wait_for_examine" 00:10:45.183 } 00:10:45.183 ] 00:10:45.183 } 00:10:45.183 ] 00:10:45.183 } 00:10:45.183 [2024-09-27 13:15:46.962928] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.183 [2024-09-27 13:15:47.022604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.442 [2024-09-27 13:15:47.052186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:45.701  Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:45.701 00:10:45.701 13:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:10:45.701 13:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:45.701 13:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:10:45.701 13:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:10:45.701 13:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:10:45.701 13:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:10:45.701 13:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:45.701 13:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:46.269 13:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:10:46.269 13:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:46.269 13:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:46.269 13:15:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:46.269 [2024-09-27 13:15:47.941644] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:46.269 [2024-09-27 13:15:47.941796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59820 ] 00:10:46.269 { 00:10:46.269 "subsystems": [ 00:10:46.269 { 00:10:46.269 "subsystem": "bdev", 00:10:46.269 "config": [ 00:10:46.269 { 00:10:46.269 "params": { 00:10:46.269 "trtype": "pcie", 00:10:46.269 "traddr": "0000:00:10.0", 00:10:46.269 "name": "Nvme0" 00:10:46.269 }, 00:10:46.269 "method": "bdev_nvme_attach_controller" 00:10:46.269 }, 00:10:46.269 { 00:10:46.269 "method": "bdev_wait_for_examine" 00:10:46.269 } 00:10:46.269 ] 00:10:46.269 } 00:10:46.269 ] 00:10:46.269 } 00:10:46.269 [2024-09-27 13:15:48.074270] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.528 [2024-09-27 13:15:48.130853] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.528 [2024-09-27 13:15:48.161748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:46.787  Copying: 48/48 [kB] (average 46 MBps) 00:10:46.787 00:10:46.787 13:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:46.787 13:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:10:46.787 13:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:46.787 13:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:46.787 { 00:10:46.787 "subsystems": [ 00:10:46.787 { 00:10:46.787 "subsystem": "bdev", 00:10:46.787 "config": [ 00:10:46.787 { 00:10:46.787 "params": { 00:10:46.787 "trtype": "pcie", 00:10:46.787 "traddr": "0000:00:10.0", 00:10:46.787 "name": "Nvme0" 00:10:46.787 }, 00:10:46.787 "method": "bdev_nvme_attach_controller" 00:10:46.787 }, 00:10:46.787 { 00:10:46.787 "method": "bdev_wait_for_examine" 00:10:46.787 } 00:10:46.787 ] 00:10:46.787 } 00:10:46.787 ] 00:10:46.787 } 00:10:46.787 [2024-09-27 13:15:48.478604] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
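By this point the trace has reached the largest block size. dd_rw builds its matrix as bss = native_bs << {0,1,2}, that is 4096, 8192 and 16384 bytes, crossed with qds = (1 64), and the per-size counts seen above (15, 7, 3) keep the transfers at 61440, 57344 and 49152 bytes. A small loop reproducing the parameter combinations that appear in the log:

#!/usr/bin/env bash
native_bs=4096
qds=(1 64)
bss=()
for shift_amt in {0..2}; do
    bss+=($((native_bs << shift_amt)))      # 4096 8192 16384
done
counts=(15 7 3)                             # paired with the sizes in the trace

for i in "${!bss[@]}"; do
    for qd in "${qds[@]}"; do
        bs=${bss[i]} count=${counts[i]}
        echo "bs=$bs qd=$qd count=$count size=$((bs * count))"
    done
done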
00:10:46.787 [2024-09-27 13:15:48.478793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59838 ] 00:10:46.787 [2024-09-27 13:15:48.629626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.046 [2024-09-27 13:15:48.689888] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.046 [2024-09-27 13:15:48.719813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.306  Copying: 48/48 [kB] (average 46 MBps) 00:10:47.306 00:10:47.306 13:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:47.306 13:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:10:47.306 13:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:47.306 13:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:47.306 13:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:10:47.306 13:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:47.306 13:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:47.306 13:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:47.306 13:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:47.306 13:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:47.306 13:15:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:47.306 [2024-09-27 13:15:49.031976] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:47.306 [2024-09-27 13:15:49.032078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59849 ] 00:10:47.306 { 00:10:47.306 "subsystems": [ 00:10:47.306 { 00:10:47.306 "subsystem": "bdev", 00:10:47.306 "config": [ 00:10:47.306 { 00:10:47.306 "params": { 00:10:47.306 "trtype": "pcie", 00:10:47.306 "traddr": "0000:00:10.0", 00:10:47.306 "name": "Nvme0" 00:10:47.306 }, 00:10:47.306 "method": "bdev_nvme_attach_controller" 00:10:47.306 }, 00:10:47.306 { 00:10:47.306 "method": "bdev_wait_for_examine" 00:10:47.306 } 00:10:47.306 ] 00:10:47.306 } 00:10:47.306 ] 00:10:47.306 } 00:10:47.567 [2024-09-27 13:15:49.169466] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.567 [2024-09-27 13:15:49.228943] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.567 [2024-09-27 13:15:49.259534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.826  Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:47.826 00:10:47.826 13:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:47.826 13:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:10:47.826 13:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:10:47.826 13:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:10:47.826 13:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:10:47.826 13:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:47.826 13:15:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:48.393 13:15:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:10:48.393 13:15:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:48.393 13:15:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:48.393 13:15:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:48.393 [2024-09-27 13:15:50.160358] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
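The clear_nvme calls that close each iteration (dd/common.sh@10 through @18 in the traces) reset the bdev so a stale pattern cannot satisfy the next diff: they take the bdev name and the size that was just written, then push a single 1 MiB block of zeroes through spdk_dd. Condensed from the lines visible in the trace, with error handling and the nvme_ref path left out:

#!/usr/bin/env bash
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

clear_nvme() {
    local bdev=$1
    local nvme_ref=$2   # unused in this sketch, kept to mirror the call sites above
    local size=$3       # tracked by the real helper; the zero-fill in these runs is always 1 MiB
    local bs=1048576
    local count=1
    "$SPDK_DD" --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" --json conf.json
}

clear_nvme Nvme0n1 '' 61440   # the call that follows each 61440-byte run above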
00:10:48.393 [2024-09-27 13:15:50.160472] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59868 ] 00:10:48.393 { 00:10:48.393 "subsystems": [ 00:10:48.393 { 00:10:48.393 "subsystem": "bdev", 00:10:48.393 "config": [ 00:10:48.393 { 00:10:48.393 "params": { 00:10:48.393 "trtype": "pcie", 00:10:48.393 "traddr": "0000:00:10.0", 00:10:48.393 "name": "Nvme0" 00:10:48.393 }, 00:10:48.393 "method": "bdev_nvme_attach_controller" 00:10:48.393 }, 00:10:48.393 { 00:10:48.393 "method": "bdev_wait_for_examine" 00:10:48.393 } 00:10:48.393 ] 00:10:48.393 } 00:10:48.393 ] 00:10:48.393 } 00:10:48.651 [2024-09-27 13:15:50.296963] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.651 [2024-09-27 13:15:50.359724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.651 [2024-09-27 13:15:50.391443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:48.910  Copying: 48/48 [kB] (average 46 MBps) 00:10:48.910 00:10:48.910 13:15:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:48.910 13:15:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:10:48.910 13:15:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:48.910 13:15:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:48.910 { 00:10:48.910 "subsystems": [ 00:10:48.910 { 00:10:48.910 "subsystem": "bdev", 00:10:48.910 "config": [ 00:10:48.910 { 00:10:48.910 "params": { 00:10:48.910 "trtype": "pcie", 00:10:48.910 "traddr": "0000:00:10.0", 00:10:48.910 "name": "Nvme0" 00:10:48.910 }, 00:10:48.910 "method": "bdev_nvme_attach_controller" 00:10:48.910 }, 00:10:48.910 { 00:10:48.910 "method": "bdev_wait_for_examine" 00:10:48.910 } 00:10:48.910 ] 00:10:48.910 } 00:10:48.910 ] 00:10:48.910 } 00:10:48.910 [2024-09-27 13:15:50.699676] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:48.910 [2024-09-27 13:15:50.699798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59887 ] 00:10:49.168 [2024-09-27 13:15:50.838540] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.168 [2024-09-27 13:15:50.899465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.168 [2024-09-27 13:15:50.928783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:49.428  Copying: 48/48 [kB] (average 46 MBps) 00:10:49.428 00:10:49.428 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:49.428 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:10:49.428 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:49.428 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:49.428 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:10:49.428 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:49.428 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:49.428 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:49.428 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:49.428 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:49.428 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:49.428 [2024-09-27 13:15:51.229241] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:49.428 [2024-09-27 13:15:51.229321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59897 ] 00:10:49.428 { 00:10:49.428 "subsystems": [ 00:10:49.428 { 00:10:49.428 "subsystem": "bdev", 00:10:49.428 "config": [ 00:10:49.428 { 00:10:49.428 "params": { 00:10:49.428 "trtype": "pcie", 00:10:49.428 "traddr": "0000:00:10.0", 00:10:49.428 "name": "Nvme0" 00:10:49.428 }, 00:10:49.428 "method": "bdev_nvme_attach_controller" 00:10:49.428 }, 00:10:49.428 { 00:10:49.428 "method": "bdev_wait_for_examine" 00:10:49.428 } 00:10:49.428 ] 00:10:49.428 } 00:10:49.428 ] 00:10:49.428 } 00:10:49.719 [2024-09-27 13:15:51.362091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.719 [2024-09-27 13:15:51.423811] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.719 [2024-09-27 13:15:51.454131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:49.979  Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:49.979 00:10:49.979 00:10:49.979 real 0m13.490s 00:10:49.979 user 0m10.307s 00:10:49.979 sys 0m3.859s 00:10:49.979 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:49.979 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:49.979 ************************************ 00:10:49.979 END TEST dd_rw 00:10:49.979 ************************************ 00:10:49.979 13:15:51 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:10:49.979 13:15:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:49.979 13:15:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:49.979 13:15:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:49.979 ************************************ 00:10:49.979 START TEST dd_rw_offset 00:10:49.979 ************************************ 00:10:49.979 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:10:49.979 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:10:49.979 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:10:49.979 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:10:49.979 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:49.979 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:10:49.979 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=xoslctbqc7i8m15qa98n26dkcf4l399ic2zfeeovw9rlltmehe64m1yc4sau04ae0523pmqm92yrx2jyvcqrwiacbvysmm4w1lt590hu4celxy9nppoomzipr2p52va549ulkazyv5t54jj6d945i2turvgh15ah68hdmqxqiz1nffjvtvi0wqi5asukj1is9o6kxqz4vnudx0ts0uryv8z3xwzd0e4govuww9kcghs2skgcavhvgt2hbtwhjy884yrbe2ozlnxtw9pz191zxb2xe7clbiae71f57y4qj0jwt3k1abc0tnqbmk3rp7u8dquafadh1tzezhifwqg7i6dbiyoo4g6egdrsp57oh9tj7ipttdutt0vws6o94c46pbesslqhrqb0fukme4dagwpagjqxpuu6rlzoe0srgb8d1ugdwhbxswfddv7xsdyi6i4g5hoatcb1t7ke873y3hwzjldufv1a7vo1xjyy42dvle490f67t926op6yb49e10jl6vwqrj3f0pp9lkrx8aqf7iuz84j44oqkyblkfedwmfiamq475equ4q1txyvxt2lkpklufgm688nw7esp5355ee8fkr6s6f2kyqlkfjikb1girdakeyneryfelz1973ss72d5htpyo1vktdsqss35eaq7cqbh44tc7cur9t6gcd5s8vmpegf3euna0yhiyn7x8mjme10zjt51p6gb1afpllkocvyd644hlad62v8wssi7quc6440no6d0qbggmx5mu7nxn3b4p045gbllhj3tiz94c31ee53k3qfxx5d4055n0s22bv80a3lfvznpnjqmqwbux23jo35nvcbrvgw9ytqbr3dref4hh4ewjr7owumgu7dfiksmn0qc7a5hoe0pmbgcv3kvd2orcafkn1c1p3vuhaq750kxu0cduvnx6iqty9vvncur0ufc7xncymhn69wsi8ssxx6zb4ko3spcfiltuhlae84wsrzbmha90252dsvz426ewpjpzhljqm13wck0unpwifg1cffrndhonfzxgph5460a52p8cbl82v3tgmn3azcjs3dmhm6vrhk30ufid1a7w0jufvqdivksda9j31wpqr9d2r8k2pl4rewwv4n5prvjiecqgfqw5qpfcy0jc8v74k4roxwc39jis2yjtsg1t3l2ecz63agljkwgqqh11ug7wri3te7hyjzseb8mv0y53sr7atraxktlxt2ulyffk5e1xb2h94gooo7s2orlhyb6a09eia1m7p9q0f5r1hftravnl098vb982gjq3nk78d2exhz9spa4ntn0lnttkqusayy7spdbthteddtkzfwc0hxo8gknzowtf5fsf65st546vh7gpkrmp6x4arbbwcvz4fkuk18iev9vbdxw8k9cj8l62d8darcslwpwqas5van2hiaou0zrj181sxoiwngetncq9etsop2ae5dupq6k5l19okc9x2bp0lzwzuy7id83emjjw04kpiq4kwymsgpftscylpq6mlpu612x1sag7isdooo6awmntg25fj8ru95d6ckbmbrl1bdvsw4r6rns32b6ncgpsq5x5xs3njyojlb7wv5pg2beqfv00lbu4azmilaem92wrt8cn7295otk715xi1cja3kc0oofiawnbgodvmym231h6hliv0xn3ew0e39sd6e2mfr7f8n832fsto8pk9xzq2kh2jvdrdip0vjrhqcii9saljcpghxgxfrr2tymwa5v9ugwzpx4zzbt1v22xaelj9w7hlzo90vt21jkmh67d609dek232n3niu2pp2c3875vi6fmfhn6zua84ee18c0q5no7fr9o0vvdw8ap2406pnybcqc2pa3kgo6qp7yv4d11wq4kots7ylmp1zb8wqvhm1iieg1sfi0f9lgfyvrhz56q0emcq3xhpllwqdivaucejf7hd7emfuqnr9h5g6x3bwxfszmhckpuursyqdqa0f5ha94xpfxoqbwnrxoy8l7kf0zc8r92tiaur2aviqnx70dmbv222n5geknk9775iqsmckp3ed2uydvqthgeaer2nqfdmnfrry0t5cp2478l7ci1s9a4wpgjq8p9afhavo9ah5z5zzmpqo4vv9emjdfesmnsni5cbvftuts809zpphqoqu8t8jt3p57j828ueurm8yfhxl4ccfm7h593oj84ssylugc5e9987vwkdnkzyzipsnoxlpa8dlrm7qoa4drdu0mhex0tfpqg3juisfs3du8619lzzy795e04lyqo83nognqfp20wm3ufyygudxhzvzsr3et048srzxvlyimno7k3m9zuwytov4nptle1jezncur6pz3w5uaseszj4vcme6l2l8qo0y9laiosw0e6ocn4g242w0yqsp5bre43yqtojzdq5qeas77dnkevhjjazof23wfth3rlh4mcqrfzauxmm5ie1btnzcua1jshyueh428m2a8zq3vym0cjyesv35p3675g3jxli5jb33fga090kge3ppdb25bvk1cj6mn3oxsu4icfdt12a1xi8jeyonxbb5fjlzoy87xx1nmn8ztvch4164rflp0a66aq26yr1l3yr5um14m62gzvm1idvav4yui8njzly3qrmao2a4s1gtwdiue7nakrsvttxkbuzge3a4424g9k4k2p6mohe4lpg4y7nzbja1zx6h1i2pfpuuf8zqqflmj1y9mufx889lfvq4qic6kt7o3i6cwl2nzevj1l01a2bckotphtgkhjeyzyxw0jg8f11j6sokk5t8cx1lah480si18acxs70mvnjtnyxoh0s8yhcfpnsebtyp1qtk2p7rzjsq62ewhypn0mjk68glcwne5z1ouhew90vj4l23dforxt165m7bbj3r4j8n376mo8aevzhbnboso47hhnhfone4fxui6u1tepgvps6m19q8pmlvjh4hkl63nwzm6m87stwnalec2l0fzqx30rq351uqax9ztrv3pfy1uq7d3cm9famdlorkq2kza5snkuilqxp6waf5cwap1rt8vuors5ptamjcfjlh090nn66omj0n39c2vmhlk3d59vq1oh5522doy18q1xwb95kf945t4y9dvq9pwibr5ggyiaej4g76atbci9fr7lo86dyafdl0yfvuaqpqd0a9qxi0cu8jcxgqyyf706a7k9aktbdxf34zokwtba80mgx6e5baqggrw7la2pob976nedtvhjhpat3yvozy4piatbb26bclr7q42yv37trxj1wecefe7f6ljypu8m6icoyyvecvwxejv485m5l71cwgimui47595mbyz5o4stokggvg1rs7vo834vsac98m46qf35ozar45737hbc0hpeqwscuyh82eujkn4n6b3iqi77wlpw45d78gdadomr5el9i6yxvavqjs4ubik4iflvpjrn18b0bleolq590bh9wo11izsni9li8bbjebhb2jnojq9ynnfmzu8tlpxjvku5b2b7a
o2jg16mxke4bcxei3qrm84q3pzphia6s3wnqf64kmir7m8kmt56xdrrduhdozjvwqge4u3c0s5am3xues9kint9pedx4v1pi95s8z1hr3gwc03iixj854fk8lv3ii1mjreqog22csdhe0k5qpytvum0y1gfpltu6feyt08bq4m0x4oncclov59eorxgxryhd2xbdq1o3ndv3ky3f3b8wju3u271mfyf2scvnxnn20x99f1s5sv4etbxpljc4bdnkgpaccfwciy3kiw1zqn4y629m7i9y1qsd9851f7cn2vilx9qag0b11upq6wnj2wunzxjjkbosvdkh52fij47ypeo7ce4mce0mn3ss3goq1w6bw9s59tydsviv2lqznlnv3ygp943h3xjcsokzgvk23e4wvevan483mbnc4z52r75f1b23mpmzuj6a7h9agtierlbybqdddraugv1yeb066h0siygwrvoe5wmvovdqzdic7voxkis3tx4iqadrb1n1kxz76inakmyuhas3dgvaqy5fzu1y24j7u7 00:10:49.979 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:10:49.979 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:10:49.979 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:10:49.979 13:15:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:50.239 [2024-09-27 13:15:51.881633] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:10:50.239 [2024-09-27 13:15:51.881776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59933 ] 00:10:50.239 { 00:10:50.239 "subsystems": [ 00:10:50.239 { 00:10:50.239 "subsystem": "bdev", 00:10:50.239 "config": [ 00:10:50.239 { 00:10:50.239 "params": { 00:10:50.239 "trtype": "pcie", 00:10:50.239 "traddr": "0000:00:10.0", 00:10:50.239 "name": "Nvme0" 00:10:50.239 }, 00:10:50.239 "method": "bdev_nvme_attach_controller" 00:10:50.239 }, 00:10:50.239 { 00:10:50.239 "method": "bdev_wait_for_examine" 00:10:50.239 } 00:10:50.239 ] 00:10:50.239 } 00:10:50.239 ] 00:10:50.239 } 00:10:50.239 [2024-09-27 13:15:52.020981] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.239 [2024-09-27 13:15:52.072467] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.495 [2024-09-27 13:15:52.101380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:50.495  Copying: 4096/4096 [B] (average 4000 kBps) 00:10:50.495 00:10:50.752 13:15:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:10:50.752 13:15:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:10:50.752 13:15:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:10:50.752 13:15:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:50.752 [2024-09-27 13:15:52.397976] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
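dd_rw_offset (basic_offset) exercises --seek and --skip rather than throughput: it writes one native-sized block of generated text at block offset 1, reads exactly one block back from the same offset, and string-compares the read-back bytes against the original (the long escaped pattern further down is bash printing that comparison). A condensed sketch of the same flow, reusing the placeholder conf.json and substituting /dev/urandom for gen_bytes:

#!/usr/bin/env bash
set -e
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
count=1 seek=1 skip=1
bs=4096                                     # native block size found earlier

data=$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c "$bs")   # stand-in for gen_bytes 4096
printf '%s' "$data" > dd.dump0

"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --seek="$seek" --json conf.json
"$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --skip="$skip" --count="$count" --json conf.json

read -rn"$bs" data_check < dd.dump1
[[ $data == "$data_check" ]] && echo "offset round-trip OK"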
00:10:50.753 [2024-09-27 13:15:52.398090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59941 ] 00:10:50.753 { 00:10:50.753 "subsystems": [ 00:10:50.753 { 00:10:50.753 "subsystem": "bdev", 00:10:50.753 "config": [ 00:10:50.753 { 00:10:50.753 "params": { 00:10:50.753 "trtype": "pcie", 00:10:50.753 "traddr": "0000:00:10.0", 00:10:50.753 "name": "Nvme0" 00:10:50.753 }, 00:10:50.753 "method": "bdev_nvme_attach_controller" 00:10:50.753 }, 00:10:50.753 { 00:10:50.753 "method": "bdev_wait_for_examine" 00:10:50.753 } 00:10:50.753 ] 00:10:50.753 } 00:10:50.753 ] 00:10:50.753 } 00:10:50.753 [2024-09-27 13:15:52.536845] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.753 [2024-09-27 13:15:52.595412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.010 [2024-09-27 13:15:52.626486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:51.269  Copying: 4096/4096 [B] (average 4000 kBps) 00:10:51.269 00:10:51.269 13:15:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:10:51.270 13:15:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ xoslctbqc7i8m15qa98n26dkcf4l399ic2zfeeovw9rlltmehe64m1yc4sau04ae0523pmqm92yrx2jyvcqrwiacbvysmm4w1lt590hu4celxy9nppoomzipr2p52va549ulkazyv5t54jj6d945i2turvgh15ah68hdmqxqiz1nffjvtvi0wqi5asukj1is9o6kxqz4vnudx0ts0uryv8z3xwzd0e4govuww9kcghs2skgcavhvgt2hbtwhjy884yrbe2ozlnxtw9pz191zxb2xe7clbiae71f57y4qj0jwt3k1abc0tnqbmk3rp7u8dquafadh1tzezhifwqg7i6dbiyoo4g6egdrsp57oh9tj7ipttdutt0vws6o94c46pbesslqhrqb0fukme4dagwpagjqxpuu6rlzoe0srgb8d1ugdwhbxswfddv7xsdyi6i4g5hoatcb1t7ke873y3hwzjldufv1a7vo1xjyy42dvle490f67t926op6yb49e10jl6vwqrj3f0pp9lkrx8aqf7iuz84j44oqkyblkfedwmfiamq475equ4q1txyvxt2lkpklufgm688nw7esp5355ee8fkr6s6f2kyqlkfjikb1girdakeyneryfelz1973ss72d5htpyo1vktdsqss35eaq7cqbh44tc7cur9t6gcd5s8vmpegf3euna0yhiyn7x8mjme10zjt51p6gb1afpllkocvyd644hlad62v8wssi7quc6440no6d0qbggmx5mu7nxn3b4p045gbllhj3tiz94c31ee53k3qfxx5d4055n0s22bv80a3lfvznpnjqmqwbux23jo35nvcbrvgw9ytqbr3dref4hh4ewjr7owumgu7dfiksmn0qc7a5hoe0pmbgcv3kvd2orcafkn1c1p3vuhaq750kxu0cduvnx6iqty9vvncur0ufc7xncymhn69wsi8ssxx6zb4ko3spcfiltuhlae84wsrzbmha90252dsvz426ewpjpzhljqm13wck0unpwifg1cffrndhonfzxgph5460a52p8cbl82v3tgmn3azcjs3dmhm6vrhk30ufid1a7w0jufvqdivksda9j31wpqr9d2r8k2pl4rewwv4n5prvjiecqgfqw5qpfcy0jc8v74k4roxwc39jis2yjtsg1t3l2ecz63agljkwgqqh11ug7wri3te7hyjzseb8mv0y53sr7atraxktlxt2ulyffk5e1xb2h94gooo7s2orlhyb6a09eia1m7p9q0f5r1hftravnl098vb982gjq3nk78d2exhz9spa4ntn0lnttkqusayy7spdbthteddtkzfwc0hxo8gknzowtf5fsf65st546vh7gpkrmp6x4arbbwcvz4fkuk18iev9vbdxw8k9cj8l62d8darcslwpwqas5van2hiaou0zrj181sxoiwngetncq9etsop2ae5dupq6k5l19okc9x2bp0lzwzuy7id83emjjw04kpiq4kwymsgpftscylpq6mlpu612x1sag7isdooo6awmntg25fj8ru95d6ckbmbrl1bdvsw4r6rns32b6ncgpsq5x5xs3njyojlb7wv5pg2beqfv00lbu4azmilaem92wrt8cn7295otk715xi1cja3kc0oofiawnbgodvmym231h6hliv0xn3ew0e39sd6e2mfr7f8n832fsto8pk9xzq2kh2jvdrdip0vjrhqcii9saljcpghxgxfrr2tymwa5v9ugwzpx4zzbt1v22xaelj9w7hlzo90vt21jkmh67d609dek232n3niu2pp2c3875vi6fmfhn6zua84ee18c0q5no7fr9o0vvdw8ap2406pnybcqc2pa3kgo6qp7yv4d11wq4kots7ylmp1zb8wqvhm1iieg1sfi0f9lgfyvrhz56q0emcq3xhpllwqdivaucejf7hd7emfuqnr9h5g6x3bwxfszmhckpuursyqdqa0f5ha94xpfxoqbwnrxoy8l7kf0zc8r92tiaur2aviqnx70dmbv222n5geknk9775iqsmckp3ed2uydvqthgeaer2nqfdmnfrry0t5cp2478l7ci1s9a4wpgjq8p9afhavo9ah5z5zzmpqo4vv9emjdf
esmnsni5cbvftuts809zpphqoqu8t8jt3p57j828ueurm8yfhxl4ccfm7h593oj84ssylugc5e9987vwkdnkzyzipsnoxlpa8dlrm7qoa4drdu0mhex0tfpqg3juisfs3du8619lzzy795e04lyqo83nognqfp20wm3ufyygudxhzvzsr3et048srzxvlyimno7k3m9zuwytov4nptle1jezncur6pz3w5uaseszj4vcme6l2l8qo0y9laiosw0e6ocn4g242w0yqsp5bre43yqtojzdq5qeas77dnkevhjjazof23wfth3rlh4mcqrfzauxmm5ie1btnzcua1jshyueh428m2a8zq3vym0cjyesv35p3675g3jxli5jb33fga090kge3ppdb25bvk1cj6mn3oxsu4icfdt12a1xi8jeyonxbb5fjlzoy87xx1nmn8ztvch4164rflp0a66aq26yr1l3yr5um14m62gzvm1idvav4yui8njzly3qrmao2a4s1gtwdiue7nakrsvttxkbuzge3a4424g9k4k2p6mohe4lpg4y7nzbja1zx6h1i2pfpuuf8zqqflmj1y9mufx889lfvq4qic6kt7o3i6cwl2nzevj1l01a2bckotphtgkhjeyzyxw0jg8f11j6sokk5t8cx1lah480si18acxs70mvnjtnyxoh0s8yhcfpnsebtyp1qtk2p7rzjsq62ewhypn0mjk68glcwne5z1ouhew90vj4l23dforxt165m7bbj3r4j8n376mo8aevzhbnboso47hhnhfone4fxui6u1tepgvps6m19q8pmlvjh4hkl63nwzm6m87stwnalec2l0fzqx30rq351uqax9ztrv3pfy1uq7d3cm9famdlorkq2kza5snkuilqxp6waf5cwap1rt8vuors5ptamjcfjlh090nn66omj0n39c2vmhlk3d59vq1oh5522doy18q1xwb95kf945t4y9dvq9pwibr5ggyiaej4g76atbci9fr7lo86dyafdl0yfvuaqpqd0a9qxi0cu8jcxgqyyf706a7k9aktbdxf34zokwtba80mgx6e5baqggrw7la2pob976nedtvhjhpat3yvozy4piatbb26bclr7q42yv37trxj1wecefe7f6ljypu8m6icoyyvecvwxejv485m5l71cwgimui47595mbyz5o4stokggvg1rs7vo834vsac98m46qf35ozar45737hbc0hpeqwscuyh82eujkn4n6b3iqi77wlpw45d78gdadomr5el9i6yxvavqjs4ubik4iflvpjrn18b0bleolq590bh9wo11izsni9li8bbjebhb2jnojq9ynnfmzu8tlpxjvku5b2b7ao2jg16mxke4bcxei3qrm84q3pzphia6s3wnqf64kmir7m8kmt56xdrrduhdozjvwqge4u3c0s5am3xues9kint9pedx4v1pi95s8z1hr3gwc03iixj854fk8lv3ii1mjreqog22csdhe0k5qpytvum0y1gfpltu6feyt08bq4m0x4oncclov59eorxgxryhd2xbdq1o3ndv3ky3f3b8wju3u271mfyf2scvnxnn20x99f1s5sv4etbxpljc4bdnkgpaccfwciy3kiw1zqn4y629m7i9y1qsd9851f7cn2vilx9qag0b11upq6wnj2wunzxjjkbosvdkh52fij47ypeo7ce4mce0mn3ss3goq1w6bw9s59tydsviv2lqznlnv3ygp943h3xjcsokzgvk23e4wvevan483mbnc4z52r75f1b23mpmzuj6a7h9agtierlbybqdddraugv1yeb066h0siygwrvoe5wmvovdqzdic7voxkis3tx4iqadrb1n1kxz76inakmyuhas3dgvaqy5fzu1y24j7u7 == 
\x\o\s\l\c\t\b\q\c\7\i\8\m\1\5\q\a\9\8\n\2\6\d\k\c\f\4\l\3\9\9\i\c\2\z\f\e\e\o\v\w\9\r\l\l\t\m\e\h\e\6\4\m\1\y\c\4\s\a\u\0\4\a\e\0\5\2\3\p\m\q\m\9\2\y\r\x\2\j\y\v\c\q\r\w\i\a\c\b\v\y\s\m\m\4\w\1\l\t\5\9\0\h\u\4\c\e\l\x\y\9\n\p\p\o\o\m\z\i\p\r\2\p\5\2\v\a\5\4\9\u\l\k\a\z\y\v\5\t\5\4\j\j\6\d\9\4\5\i\2\t\u\r\v\g\h\1\5\a\h\6\8\h\d\m\q\x\q\i\z\1\n\f\f\j\v\t\v\i\0\w\q\i\5\a\s\u\k\j\1\i\s\9\o\6\k\x\q\z\4\v\n\u\d\x\0\t\s\0\u\r\y\v\8\z\3\x\w\z\d\0\e\4\g\o\v\u\w\w\9\k\c\g\h\s\2\s\k\g\c\a\v\h\v\g\t\2\h\b\t\w\h\j\y\8\8\4\y\r\b\e\2\o\z\l\n\x\t\w\9\p\z\1\9\1\z\x\b\2\x\e\7\c\l\b\i\a\e\7\1\f\5\7\y\4\q\j\0\j\w\t\3\k\1\a\b\c\0\t\n\q\b\m\k\3\r\p\7\u\8\d\q\u\a\f\a\d\h\1\t\z\e\z\h\i\f\w\q\g\7\i\6\d\b\i\y\o\o\4\g\6\e\g\d\r\s\p\5\7\o\h\9\t\j\7\i\p\t\t\d\u\t\t\0\v\w\s\6\o\9\4\c\4\6\p\b\e\s\s\l\q\h\r\q\b\0\f\u\k\m\e\4\d\a\g\w\p\a\g\j\q\x\p\u\u\6\r\l\z\o\e\0\s\r\g\b\8\d\1\u\g\d\w\h\b\x\s\w\f\d\d\v\7\x\s\d\y\i\6\i\4\g\5\h\o\a\t\c\b\1\t\7\k\e\8\7\3\y\3\h\w\z\j\l\d\u\f\v\1\a\7\v\o\1\x\j\y\y\4\2\d\v\l\e\4\9\0\f\6\7\t\9\2\6\o\p\6\y\b\4\9\e\1\0\j\l\6\v\w\q\r\j\3\f\0\p\p\9\l\k\r\x\8\a\q\f\7\i\u\z\8\4\j\4\4\o\q\k\y\b\l\k\f\e\d\w\m\f\i\a\m\q\4\7\5\e\q\u\4\q\1\t\x\y\v\x\t\2\l\k\p\k\l\u\f\g\m\6\8\8\n\w\7\e\s\p\5\3\5\5\e\e\8\f\k\r\6\s\6\f\2\k\y\q\l\k\f\j\i\k\b\1\g\i\r\d\a\k\e\y\n\e\r\y\f\e\l\z\1\9\7\3\s\s\7\2\d\5\h\t\p\y\o\1\v\k\t\d\s\q\s\s\3\5\e\a\q\7\c\q\b\h\4\4\t\c\7\c\u\r\9\t\6\g\c\d\5\s\8\v\m\p\e\g\f\3\e\u\n\a\0\y\h\i\y\n\7\x\8\m\j\m\e\1\0\z\j\t\5\1\p\6\g\b\1\a\f\p\l\l\k\o\c\v\y\d\6\4\4\h\l\a\d\6\2\v\8\w\s\s\i\7\q\u\c\6\4\4\0\n\o\6\d\0\q\b\g\g\m\x\5\m\u\7\n\x\n\3\b\4\p\0\4\5\g\b\l\l\h\j\3\t\i\z\9\4\c\3\1\e\e\5\3\k\3\q\f\x\x\5\d\4\0\5\5\n\0\s\2\2\b\v\8\0\a\3\l\f\v\z\n\p\n\j\q\m\q\w\b\u\x\2\3\j\o\3\5\n\v\c\b\r\v\g\w\9\y\t\q\b\r\3\d\r\e\f\4\h\h\4\e\w\j\r\7\o\w\u\m\g\u\7\d\f\i\k\s\m\n\0\q\c\7\a\5\h\o\e\0\p\m\b\g\c\v\3\k\v\d\2\o\r\c\a\f\k\n\1\c\1\p\3\v\u\h\a\q\7\5\0\k\x\u\0\c\d\u\v\n\x\6\i\q\t\y\9\v\v\n\c\u\r\0\u\f\c\7\x\n\c\y\m\h\n\6\9\w\s\i\8\s\s\x\x\6\z\b\4\k\o\3\s\p\c\f\i\l\t\u\h\l\a\e\8\4\w\s\r\z\b\m\h\a\9\0\2\5\2\d\s\v\z\4\2\6\e\w\p\j\p\z\h\l\j\q\m\1\3\w\c\k\0\u\n\p\w\i\f\g\1\c\f\f\r\n\d\h\o\n\f\z\x\g\p\h\5\4\6\0\a\5\2\p\8\c\b\l\8\2\v\3\t\g\m\n\3\a\z\c\j\s\3\d\m\h\m\6\v\r\h\k\3\0\u\f\i\d\1\a\7\w\0\j\u\f\v\q\d\i\v\k\s\d\a\9\j\3\1\w\p\q\r\9\d\2\r\8\k\2\p\l\4\r\e\w\w\v\4\n\5\p\r\v\j\i\e\c\q\g\f\q\w\5\q\p\f\c\y\0\j\c\8\v\7\4\k\4\r\o\x\w\c\3\9\j\i\s\2\y\j\t\s\g\1\t\3\l\2\e\c\z\6\3\a\g\l\j\k\w\g\q\q\h\1\1\u\g\7\w\r\i\3\t\e\7\h\y\j\z\s\e\b\8\m\v\0\y\5\3\s\r\7\a\t\r\a\x\k\t\l\x\t\2\u\l\y\f\f\k\5\e\1\x\b\2\h\9\4\g\o\o\o\7\s\2\o\r\l\h\y\b\6\a\0\9\e\i\a\1\m\7\p\9\q\0\f\5\r\1\h\f\t\r\a\v\n\l\0\9\8\v\b\9\8\2\g\j\q\3\n\k\7\8\d\2\e\x\h\z\9\s\p\a\4\n\t\n\0\l\n\t\t\k\q\u\s\a\y\y\7\s\p\d\b\t\h\t\e\d\d\t\k\z\f\w\c\0\h\x\o\8\g\k\n\z\o\w\t\f\5\f\s\f\6\5\s\t\5\4\6\v\h\7\g\p\k\r\m\p\6\x\4\a\r\b\b\w\c\v\z\4\f\k\u\k\1\8\i\e\v\9\v\b\d\x\w\8\k\9\c\j\8\l\6\2\d\8\d\a\r\c\s\l\w\p\w\q\a\s\5\v\a\n\2\h\i\a\o\u\0\z\r\j\1\8\1\s\x\o\i\w\n\g\e\t\n\c\q\9\e\t\s\o\p\2\a\e\5\d\u\p\q\6\k\5\l\1\9\o\k\c\9\x\2\b\p\0\l\z\w\z\u\y\7\i\d\8\3\e\m\j\j\w\0\4\k\p\i\q\4\k\w\y\m\s\g\p\f\t\s\c\y\l\p\q\6\m\l\p\u\6\1\2\x\1\s\a\g\7\i\s\d\o\o\o\6\a\w\m\n\t\g\2\5\f\j\8\r\u\9\5\d\6\c\k\b\m\b\r\l\1\b\d\v\s\w\4\r\6\r\n\s\3\2\b\6\n\c\g\p\s\q\5\x\5\x\s\3\n\j\y\o\j\l\b\7\w\v\5\p\g\2\b\e\q\f\v\0\0\l\b\u\4\a\z\m\i\l\a\e\m\9\2\w\r\t\8\c\n\7\2\9\5\o\t\k\7\1\5\x\i\1\c\j\a\3\k\c\0\o\o\f\i\a\w\n\b\g\o\d\v\m\y\m\2\3\1\h\6\h\l\i\v\0\x\n\3\e\w\0\e\3\9\s\d\6\e\2\m\f\r\7\f\8\n\8\3\2\f\s\t\o\8\p\k\9\x\z\q\2\k\h\2\j\v\d\r\d\i\p\0\v\j\r\h\q\c\i\i\9\s\a\l\j\c\p\g\h\x\g\x\f\r\r\2\t\y\m\w\a\5\v\9\u\g\w\z\
p\x\4\z\z\b\t\1\v\2\2\x\a\e\l\j\9\w\7\h\l\z\o\9\0\v\t\2\1\j\k\m\h\6\7\d\6\0\9\d\e\k\2\3\2\n\3\n\i\u\2\p\p\2\c\3\8\7\5\v\i\6\f\m\f\h\n\6\z\u\a\8\4\e\e\1\8\c\0\q\5\n\o\7\f\r\9\o\0\v\v\d\w\8\a\p\2\4\0\6\p\n\y\b\c\q\c\2\p\a\3\k\g\o\6\q\p\7\y\v\4\d\1\1\w\q\4\k\o\t\s\7\y\l\m\p\1\z\b\8\w\q\v\h\m\1\i\i\e\g\1\s\f\i\0\f\9\l\g\f\y\v\r\h\z\5\6\q\0\e\m\c\q\3\x\h\p\l\l\w\q\d\i\v\a\u\c\e\j\f\7\h\d\7\e\m\f\u\q\n\r\9\h\5\g\6\x\3\b\w\x\f\s\z\m\h\c\k\p\u\u\r\s\y\q\d\q\a\0\f\5\h\a\9\4\x\p\f\x\o\q\b\w\n\r\x\o\y\8\l\7\k\f\0\z\c\8\r\9\2\t\i\a\u\r\2\a\v\i\q\n\x\7\0\d\m\b\v\2\2\2\n\5\g\e\k\n\k\9\7\7\5\i\q\s\m\c\k\p\3\e\d\2\u\y\d\v\q\t\h\g\e\a\e\r\2\n\q\f\d\m\n\f\r\r\y\0\t\5\c\p\2\4\7\8\l\7\c\i\1\s\9\a\4\w\p\g\j\q\8\p\9\a\f\h\a\v\o\9\a\h\5\z\5\z\z\m\p\q\o\4\v\v\9\e\m\j\d\f\e\s\m\n\s\n\i\5\c\b\v\f\t\u\t\s\8\0\9\z\p\p\h\q\o\q\u\8\t\8\j\t\3\p\5\7\j\8\2\8\u\e\u\r\m\8\y\f\h\x\l\4\c\c\f\m\7\h\5\9\3\o\j\8\4\s\s\y\l\u\g\c\5\e\9\9\8\7\v\w\k\d\n\k\z\y\z\i\p\s\n\o\x\l\p\a\8\d\l\r\m\7\q\o\a\4\d\r\d\u\0\m\h\e\x\0\t\f\p\q\g\3\j\u\i\s\f\s\3\d\u\8\6\1\9\l\z\z\y\7\9\5\e\0\4\l\y\q\o\8\3\n\o\g\n\q\f\p\2\0\w\m\3\u\f\y\y\g\u\d\x\h\z\v\z\s\r\3\e\t\0\4\8\s\r\z\x\v\l\y\i\m\n\o\7\k\3\m\9\z\u\w\y\t\o\v\4\n\p\t\l\e\1\j\e\z\n\c\u\r\6\p\z\3\w\5\u\a\s\e\s\z\j\4\v\c\m\e\6\l\2\l\8\q\o\0\y\9\l\a\i\o\s\w\0\e\6\o\c\n\4\g\2\4\2\w\0\y\q\s\p\5\b\r\e\4\3\y\q\t\o\j\z\d\q\5\q\e\a\s\7\7\d\n\k\e\v\h\j\j\a\z\o\f\2\3\w\f\t\h\3\r\l\h\4\m\c\q\r\f\z\a\u\x\m\m\5\i\e\1\b\t\n\z\c\u\a\1\j\s\h\y\u\e\h\4\2\8\m\2\a\8\z\q\3\v\y\m\0\c\j\y\e\s\v\3\5\p\3\6\7\5\g\3\j\x\l\i\5\j\b\3\3\f\g\a\0\9\0\k\g\e\3\p\p\d\b\2\5\b\v\k\1\c\j\6\m\n\3\o\x\s\u\4\i\c\f\d\t\1\2\a\1\x\i\8\j\e\y\o\n\x\b\b\5\f\j\l\z\o\y\8\7\x\x\1\n\m\n\8\z\t\v\c\h\4\1\6\4\r\f\l\p\0\a\6\6\a\q\2\6\y\r\1\l\3\y\r\5\u\m\1\4\m\6\2\g\z\v\m\1\i\d\v\a\v\4\y\u\i\8\n\j\z\l\y\3\q\r\m\a\o\2\a\4\s\1\g\t\w\d\i\u\e\7\n\a\k\r\s\v\t\t\x\k\b\u\z\g\e\3\a\4\4\2\4\g\9\k\4\k\2\p\6\m\o\h\e\4\l\p\g\4\y\7\n\z\b\j\a\1\z\x\6\h\1\i\2\p\f\p\u\u\f\8\z\q\q\f\l\m\j\1\y\9\m\u\f\x\8\8\9\l\f\v\q\4\q\i\c\6\k\t\7\o\3\i\6\c\w\l\2\n\z\e\v\j\1\l\0\1\a\2\b\c\k\o\t\p\h\t\g\k\h\j\e\y\z\y\x\w\0\j\g\8\f\1\1\j\6\s\o\k\k\5\t\8\c\x\1\l\a\h\4\8\0\s\i\1\8\a\c\x\s\7\0\m\v\n\j\t\n\y\x\o\h\0\s\8\y\h\c\f\p\n\s\e\b\t\y\p\1\q\t\k\2\p\7\r\z\j\s\q\6\2\e\w\h\y\p\n\0\m\j\k\6\8\g\l\c\w\n\e\5\z\1\o\u\h\e\w\9\0\v\j\4\l\2\3\d\f\o\r\x\t\1\6\5\m\7\b\b\j\3\r\4\j\8\n\3\7\6\m\o\8\a\e\v\z\h\b\n\b\o\s\o\4\7\h\h\n\h\f\o\n\e\4\f\x\u\i\6\u\1\t\e\p\g\v\p\s\6\m\1\9\q\8\p\m\l\v\j\h\4\h\k\l\6\3\n\w\z\m\6\m\8\7\s\t\w\n\a\l\e\c\2\l\0\f\z\q\x\3\0\r\q\3\5\1\u\q\a\x\9\z\t\r\v\3\p\f\y\1\u\q\7\d\3\c\m\9\f\a\m\d\l\o\r\k\q\2\k\z\a\5\s\n\k\u\i\l\q\x\p\6\w\a\f\5\c\w\a\p\1\r\t\8\v\u\o\r\s\5\p\t\a\m\j\c\f\j\l\h\0\9\0\n\n\6\6\o\m\j\0\n\3\9\c\2\v\m\h\l\k\3\d\5\9\v\q\1\o\h\5\5\2\2\d\o\y\1\8\q\1\x\w\b\9\5\k\f\9\4\5\t\4\y\9\d\v\q\9\p\w\i\b\r\5\g\g\y\i\a\e\j\4\g\7\6\a\t\b\c\i\9\f\r\7\l\o\8\6\d\y\a\f\d\l\0\y\f\v\u\a\q\p\q\d\0\a\9\q\x\i\0\c\u\8\j\c\x\g\q\y\y\f\7\0\6\a\7\k\9\a\k\t\b\d\x\f\3\4\z\o\k\w\t\b\a\8\0\m\g\x\6\e\5\b\a\q\g\g\r\w\7\l\a\2\p\o\b\9\7\6\n\e\d\t\v\h\j\h\p\a\t\3\y\v\o\z\y\4\p\i\a\t\b\b\2\6\b\c\l\r\7\q\4\2\y\v\3\7\t\r\x\j\1\w\e\c\e\f\e\7\f\6\l\j\y\p\u\8\m\6\i\c\o\y\y\v\e\c\v\w\x\e\j\v\4\8\5\m\5\l\7\1\c\w\g\i\m\u\i\4\7\5\9\5\m\b\y\z\5\o\4\s\t\o\k\g\g\v\g\1\r\s\7\v\o\8\3\4\v\s\a\c\9\8\m\4\6\q\f\3\5\o\z\a\r\4\5\7\3\7\h\b\c\0\h\p\e\q\w\s\c\u\y\h\8\2\e\u\j\k\n\4\n\6\b\3\i\q\i\7\7\w\l\p\w\4\5\d\7\8\g\d\a\d\o\m\r\5\e\l\9\i\6\y\x\v\a\v\q\j\s\4\u\b\i\k\4\i\f\l\v\p\j\r\n\1\8\b\0\b\l\e\o\l\q\5\9\0\b\h\9\w\o\1\1\i\z\s\n\i\9\l\i\8\b\b\j\e\b\h\b\2\j\n\o\j\q\9\y\n\n\f\m\z\u\8\t\l\p\x\j\v\k\u\5\b\2\b\7\a\o\2\j\g\1
\6\m\x\k\e\4\b\c\x\e\i\3\q\r\m\8\4\q\3\p\z\p\h\i\a\6\s\3\w\n\q\f\6\4\k\m\i\r\7\m\8\k\m\t\5\6\x\d\r\r\d\u\h\d\o\z\j\v\w\q\g\e\4\u\3\c\0\s\5\a\m\3\x\u\e\s\9\k\i\n\t\9\p\e\d\x\4\v\1\p\i\9\5\s\8\z\1\h\r\3\g\w\c\0\3\i\i\x\j\8\5\4\f\k\8\l\v\3\i\i\1\m\j\r\e\q\o\g\2\2\c\s\d\h\e\0\k\5\q\p\y\t\v\u\m\0\y\1\g\f\p\l\t\u\6\f\e\y\t\0\8\b\q\4\m\0\x\4\o\n\c\c\l\o\v\5\9\e\o\r\x\g\x\r\y\h\d\2\x\b\d\q\1\o\3\n\d\v\3\k\y\3\f\3\b\8\w\j\u\3\u\2\7\1\m\f\y\f\2\s\c\v\n\x\n\n\2\0\x\9\9\f\1\s\5\s\v\4\e\t\b\x\p\l\j\c\4\b\d\n\k\g\p\a\c\c\f\w\c\i\y\3\k\i\w\1\z\q\n\4\y\6\2\9\m\7\i\9\y\1\q\s\d\9\8\5\1\f\7\c\n\2\v\i\l\x\9\q\a\g\0\b\1\1\u\p\q\6\w\n\j\2\w\u\n\z\x\j\j\k\b\o\s\v\d\k\h\5\2\f\i\j\4\7\y\p\e\o\7\c\e\4\m\c\e\0\m\n\3\s\s\3\g\o\q\1\w\6\b\w\9\s\5\9\t\y\d\s\v\i\v\2\l\q\z\n\l\n\v\3\y\g\p\9\4\3\h\3\x\j\c\s\o\k\z\g\v\k\2\3\e\4\w\v\e\v\a\n\4\8\3\m\b\n\c\4\z\5\2\r\7\5\f\1\b\2\3\m\p\m\z\u\j\6\a\7\h\9\a\g\t\i\e\r\l\b\y\b\q\d\d\d\r\a\u\g\v\1\y\e\b\0\6\6\h\0\s\i\y\g\w\r\v\o\e\5\w\m\v\o\v\d\q\z\d\i\c\7\v\o\x\k\i\s\3\t\x\4\i\q\a\d\r\b\1\n\1\k\x\z\7\6\i\n\a\k\m\y\u\h\a\s\3\d\g\v\a\q\y\5\f\z\u\1\y\2\4\j\7\u\7 ]] 00:10:51.270 00:10:51.270 real 0m1.121s 00:10:51.270 user 0m0.814s 00:10:51.270 sys 0m0.398s 00:10:51.270 13:15:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:51.270 13:15:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:51.270 ************************************ 00:10:51.270 END TEST dd_rw_offset 00:10:51.270 ************************************ 00:10:51.270 13:15:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:10:51.270 13:15:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:10:51.270 13:15:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:51.270 13:15:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:51.270 13:15:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:10:51.270 13:15:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:51.270 13:15:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:10:51.270 13:15:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:51.270 13:15:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:10:51.270 13:15:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:51.270 13:15:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:51.270 { 00:10:51.270 "subsystems": [ 00:10:51.270 { 00:10:51.270 "subsystem": "bdev", 00:10:51.270 "config": [ 00:10:51.270 { 00:10:51.270 "params": { 00:10:51.270 "trtype": "pcie", 00:10:51.270 "traddr": "0000:00:10.0", 00:10:51.270 "name": "Nvme0" 00:10:51.270 }, 00:10:51.270 "method": "bdev_nvme_attach_controller" 00:10:51.270 }, 00:10:51.270 { 00:10:51.270 "method": "bdev_wait_for_examine" 00:10:51.270 } 00:10:51.270 ] 00:10:51.270 } 00:10:51.270 ] 00:10:51.270 } 00:10:51.270 [2024-09-27 13:15:52.980280] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
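Note on the cleanup step traced above: clear_nvme reuses the same spdk_dd binary to zero the start of the raw NVMe bdev, feeding it the bdev subsystem config that the trace prints as JSON. A minimal sketch of that pattern, reconstructed from the xtrace (not the actual dd/common.sh source); the inline JSON is the config shown above, and process substitution stands in for the /dev/fd/62 plumbing the harness uses:

    # Sketch only: zero 1 MiB of Nvme0n1, configuring the bdev layer via --json.
    config='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(printf '%s' "$config")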
00:10:51.270 [2024-09-27 13:15:52.980384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59976 ] 00:10:51.270 [2024-09-27 13:15:53.114789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.528 [2024-09-27 13:15:53.167777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.528 [2024-09-27 13:15:53.197203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:51.788  Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:51.788 00:10:51.788 13:15:53 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:51.788 00:10:51.788 real 0m16.199s 00:10:51.788 user 0m12.046s 00:10:51.788 sys 0m4.808s 00:10:51.788 13:15:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:51.788 13:15:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:51.788 ************************************ 00:10:51.788 END TEST spdk_dd_basic_rw 00:10:51.788 ************************************ 00:10:51.788 13:15:53 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:10:51.788 13:15:53 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:51.788 13:15:53 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:51.788 13:15:53 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:51.788 ************************************ 00:10:51.788 START TEST spdk_dd_posix 00:10:51.788 ************************************ 00:10:51.788 13:15:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:10:51.788 * Looking for test storage... 
00:10:51.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:51.788 13:15:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:51.788 13:15:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lcov --version 00:10:51.788 13:15:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:52.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.047 --rc genhtml_branch_coverage=1 00:10:52.047 --rc genhtml_function_coverage=1 00:10:52.047 --rc genhtml_legend=1 00:10:52.047 --rc geninfo_all_blocks=1 00:10:52.047 --rc geninfo_unexecuted_blocks=1 00:10:52.047 00:10:52.047 ' 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:52.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.047 --rc genhtml_branch_coverage=1 00:10:52.047 --rc genhtml_function_coverage=1 00:10:52.047 --rc genhtml_legend=1 00:10:52.047 --rc geninfo_all_blocks=1 00:10:52.047 --rc geninfo_unexecuted_blocks=1 00:10:52.047 00:10:52.047 ' 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:52.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.047 --rc genhtml_branch_coverage=1 00:10:52.047 --rc genhtml_function_coverage=1 00:10:52.047 --rc genhtml_legend=1 00:10:52.047 --rc geninfo_all_blocks=1 00:10:52.047 --rc geninfo_unexecuted_blocks=1 00:10:52.047 00:10:52.047 ' 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:52.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.047 --rc genhtml_branch_coverage=1 00:10:52.047 --rc genhtml_function_coverage=1 00:10:52.047 --rc genhtml_legend=1 00:10:52.047 --rc geninfo_all_blocks=1 00:10:52.047 --rc geninfo_unexecuted_blocks=1 00:10:52.047 00:10:52.047 ' 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.047 13:15:53 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:10:52.048 * First test run, liburing in use 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:52.048 ************************************ 00:10:52.048 START TEST dd_flag_append 00:10:52.048 ************************************ 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=szpv4u4fffnhcmpt9u83h7a3mr7l3h94 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=zj021zo0k6ej8bm69z42w9cgj19bzory 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s szpv4u4fffnhcmpt9u83h7a3mr7l3h94 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s zj021zo0k6ej8bm69z42w9cgj19bzory 00:10:52.048 13:15:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:10:52.048 [2024-09-27 13:15:53.781352] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
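The dd_flag_append run being launched here reduces to: write two short random strings to dump0 and dump1, copy dump0 onto dump1 with --oflag=append, and check that dump1 ends up as the concatenation. A compressed sketch of that flow, using the two 32-byte strings gen_bytes produced in this trace and the dump-file paths used throughout the log (a sketch, not the posix.sh source):

    # Sketch of the dd_flag_append check traced above.
    dump0=szpv4u4fffnhcmpt9u83h7a3mr7l3h94
    dump1=zj021zo0k6ej8bm69z42w9cgj19bzory
    printf %s "$dump0" > /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    printf %s "$dump1" > /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append
    # dump1 must now be its original contents with dump0 appended.
    [[ $(< /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1) == "$dump1$dump0" ]]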
00:10:52.048 [2024-09-27 13:15:53.781603] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60048 ] 00:10:52.306 [2024-09-27 13:15:53.920402] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.306 [2024-09-27 13:15:53.978149] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.306 [2024-09-27 13:15:54.008147] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:52.565  Copying: 32/32 [B] (average 31 kBps) 00:10:52.565 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ zj021zo0k6ej8bm69z42w9cgj19bzoryszpv4u4fffnhcmpt9u83h7a3mr7l3h94 == \z\j\0\2\1\z\o\0\k\6\e\j\8\b\m\6\9\z\4\2\w\9\c\g\j\1\9\b\z\o\r\y\s\z\p\v\4\u\4\f\f\f\n\h\c\m\p\t\9\u\8\3\h\7\a\3\m\r\7\l\3\h\9\4 ]] 00:10:52.565 00:10:52.565 real 0m0.459s 00:10:52.565 user 0m0.248s 00:10:52.565 sys 0m0.174s 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:10:52.565 ************************************ 00:10:52.565 END TEST dd_flag_append 00:10:52.565 ************************************ 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:52.565 ************************************ 00:10:52.565 START TEST dd_flag_directory 00:10:52.565 ************************************ 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:52.565 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:52.565 [2024-09-27 13:15:54.290456] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:10:52.565 [2024-09-27 13:15:54.290711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60071 ] 00:10:52.824 [2024-09-27 13:15:54.430586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.824 [2024-09-27 13:15:54.509763] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.824 [2024-09-27 13:15:54.545660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:52.824 [2024-09-27 13:15:54.570067] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:52.824 [2024-09-27 13:15:54.570130] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:52.824 [2024-09-27 13:15:54.570156] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:52.824 [2024-09-27 13:15:54.639850] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:53.082 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:10:53.082 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:53.082 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:10:53.082 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:10:53.082 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:10:53.082 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:53.082 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:53.082 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:10:53.082 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:53.082 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.082 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:53.082 13:15:54 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.082 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:53.082 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.082 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:53.082 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.082 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:53.082 13:15:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:53.082 [2024-09-27 13:15:54.804199] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:10:53.082 [2024-09-27 13:15:54.804292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60085 ] 00:10:53.341 [2024-09-27 13:15:54.939444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.341 [2024-09-27 13:15:54.998742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.341 [2024-09-27 13:15:55.030029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:53.341 [2024-09-27 13:15:55.050240] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:53.341 [2024-09-27 13:15:55.050312] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:53.341 [2024-09-27 13:15:55.050344] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:53.341 [2024-09-27 13:15:55.116057] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:53.599 00:10:53.599 real 0m0.975s 00:10:53.599 user 0m0.546s 00:10:53.599 sys 0m0.218s 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:10:53.599 ************************************ 00:10:53.599 END TEST dd_flag_directory 00:10:53.599 ************************************ 00:10:53.599 13:15:55 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:53.599 ************************************ 00:10:53.599 START TEST dd_flag_nofollow 00:10:53.599 ************************************ 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:53.599 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.600 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:53.600 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.600 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:53.600 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.600 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:53.600 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.600 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:53.600 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:53.600 [2024-09-27 13:15:55.321342] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
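The dd_flag_nofollow case starting here pairs two checks: with nofollow set on either side, a copy through one of the symlinks created just above must fail (the trace later shows the expected "Too many levels of symbolic links" errors), while a plain copy through the link must succeed. A rough sketch of the negative half, assuming the symlink and dump paths from the trace:

    # Sketch: with --iflag=nofollow, opening the source through a symlink must fail.
    ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
         --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow \
         --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1; then
      echo "unexpected success: nofollow should refuse to open a symlink" >&2
      exit 1
    fi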
00:10:53.600 [2024-09-27 13:15:55.321433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60109 ] 00:10:53.858 [2024-09-27 13:15:55.460633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.858 [2024-09-27 13:15:55.518424] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.858 [2024-09-27 13:15:55.550135] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:53.858 [2024-09-27 13:15:55.570192] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:53.858 [2024-09-27 13:15:55.570248] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:53.858 [2024-09-27 13:15:55.570287] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:53.858 [2024-09-27 13:15:55.633571] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:54.117 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:10:54.117 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:54.117 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:10:54.117 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:10:54.117 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:10:54.117 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:54.118 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:54.118 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:10:54.118 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:54.118 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:54.118 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:54.118 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:54.118 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:54.118 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:54.118 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:54.118 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:54.118 13:15:55 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:54.118 13:15:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:54.118 [2024-09-27 13:15:55.779052] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:10:54.118 [2024-09-27 13:15:55.779218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60119 ] 00:10:54.118 [2024-09-27 13:15:55.918711] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.376 [2024-09-27 13:15:55.978957] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.376 [2024-09-27 13:15:56.007012] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:54.376 [2024-09-27 13:15:56.024474] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:54.376 [2024-09-27 13:15:56.024522] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:54.376 [2024-09-27 13:15:56.024554] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:54.376 [2024-09-27 13:15:56.088123] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:54.376 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:10:54.376 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:54.376 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:10:54.376 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:10:54.376 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:10:54.376 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:54.376 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:10:54.376 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:10:54.376 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:10:54.376 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:54.635 [2024-09-27 13:15:56.245537] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:54.635 [2024-09-27 13:15:56.245617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60126 ] 00:10:54.635 [2024-09-27 13:15:56.380231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.635 [2024-09-27 13:15:56.435517] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.635 [2024-09-27 13:15:56.463438] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:54.893  Copying: 512/512 [B] (average 500 kBps) 00:10:54.893 00:10:54.893 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ cnld1g502fyqs8j732etkl1a6j7uy2xxkycd47esdy9vu9z1uzdgln5y8edcpd2ucmmwhfqf32w0z0uu811r5lvsxfczfgfx5al7yivx1dnil5hust3lc8yw9klktcbdkf21vgreph2lku15lbs7hgcq1pjli11qdhyigfrf9ueqn0j4vhquy2zfv12ji7hp7winw9uhl69qbypjhxqesvpmj8qj9rgpg8wqqf49mes9ejdfc27yp30h0rokscr4ol50ebztw0syxi1qi38gllmk0kbed1hf47j83012y5j2mgv9pfp6l1bhj5v4wzr0uib0snuukquwozrjakq3p5y3avx0y3bwtlowmxqsy6551zr766n410yl5brlmt70sasctgfy7yg8om34aycy5qfsc306usl055smq76e677wpstx5asd3orwdi09o1h8s99swtb8bclozafh3s1xl81ket995wov76wr14621at2hquq394mztx1vuwcviwa == \c\n\l\d\1\g\5\0\2\f\y\q\s\8\j\7\3\2\e\t\k\l\1\a\6\j\7\u\y\2\x\x\k\y\c\d\4\7\e\s\d\y\9\v\u\9\z\1\u\z\d\g\l\n\5\y\8\e\d\c\p\d\2\u\c\m\m\w\h\f\q\f\3\2\w\0\z\0\u\u\8\1\1\r\5\l\v\s\x\f\c\z\f\g\f\x\5\a\l\7\y\i\v\x\1\d\n\i\l\5\h\u\s\t\3\l\c\8\y\w\9\k\l\k\t\c\b\d\k\f\2\1\v\g\r\e\p\h\2\l\k\u\1\5\l\b\s\7\h\g\c\q\1\p\j\l\i\1\1\q\d\h\y\i\g\f\r\f\9\u\e\q\n\0\j\4\v\h\q\u\y\2\z\f\v\1\2\j\i\7\h\p\7\w\i\n\w\9\u\h\l\6\9\q\b\y\p\j\h\x\q\e\s\v\p\m\j\8\q\j\9\r\g\p\g\8\w\q\q\f\4\9\m\e\s\9\e\j\d\f\c\2\7\y\p\3\0\h\0\r\o\k\s\c\r\4\o\l\5\0\e\b\z\t\w\0\s\y\x\i\1\q\i\3\8\g\l\l\m\k\0\k\b\e\d\1\h\f\4\7\j\8\3\0\1\2\y\5\j\2\m\g\v\9\p\f\p\6\l\1\b\h\j\5\v\4\w\z\r\0\u\i\b\0\s\n\u\u\k\q\u\w\o\z\r\j\a\k\q\3\p\5\y\3\a\v\x\0\y\3\b\w\t\l\o\w\m\x\q\s\y\6\5\5\1\z\r\7\6\6\n\4\1\0\y\l\5\b\r\l\m\t\7\0\s\a\s\c\t\g\f\y\7\y\g\8\o\m\3\4\a\y\c\y\5\q\f\s\c\3\0\6\u\s\l\0\5\5\s\m\q\7\6\e\6\7\7\w\p\s\t\x\5\a\s\d\3\o\r\w\d\i\0\9\o\1\h\8\s\9\9\s\w\t\b\8\b\c\l\o\z\a\f\h\3\s\1\x\l\8\1\k\e\t\9\9\5\w\o\v\7\6\w\r\1\4\6\2\1\a\t\2\h\q\u\q\3\9\4\m\z\t\x\1\v\u\w\c\v\i\w\a ]] 00:10:54.893 00:10:54.893 real 0m1.382s 00:10:54.893 user 0m0.769s 00:10:54.893 sys 0m0.375s 00:10:54.893 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:54.893 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:10:54.893 ************************************ 00:10:54.893 END TEST dd_flag_nofollow 00:10:54.893 ************************************ 00:10:54.893 13:15:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:10:54.893 13:15:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:54.893 13:15:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:54.893 13:15:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:54.893 ************************************ 00:10:54.893 START TEST dd_flag_noatime 00:10:54.893 ************************************ 00:10:54.893 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:10:54.893 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:10:54.893 13:15:56 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:10:54.893 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:10:54.893 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:10:54.893 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:10:54.893 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:54.893 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1727442956 00:10:54.893 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:54.893 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1727442956 00:10:54.893 13:15:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:10:56.302 13:15:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:56.302 [2024-09-27 13:15:57.769749] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:10:56.302 [2024-09-27 13:15:57.769846] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60169 ] 00:10:56.302 [2024-09-27 13:15:57.905549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.302 [2024-09-27 13:15:57.978020] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.302 [2024-09-27 13:15:58.013092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:56.561  Copying: 512/512 [B] (average 500 kBps) 00:10:56.562 00:10:56.562 13:15:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:56.562 13:15:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1727442956 )) 00:10:56.562 13:15:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:56.562 13:15:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1727442956 )) 00:10:56.562 13:15:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:56.562 [2024-09-27 13:15:58.250080] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
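For orientation, the dd_flag_noatime sequence traced around here records dump0's access time, copies with --iflag=noatime and asserts the atime did not move, then repeats the copy without the flag and asserts the atime advanced (the 1727442956/1727442958 values above are the stat results from this run). A condensed sketch under the same assumptions; whether the final inequality holds depends on the filesystem's atime mount options, which evidently allow it in this environment:

    # Sketch of the dd_flag_noatime check (paths from the log).
    dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    atime_if=$(stat --printf=%X "$dump0")
    sleep 1
    # With --iflag=noatime the source file's access time must not change.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if="$dump0" --iflag=noatime --of="$dump1"
    (( $(stat --printf=%X "$dump0") == atime_if ))
    # A plain copy afterwards is expected to bump the access time past the old value.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if="$dump0" --of="$dump1"
    (( atime_if < $(stat --printf=%X "$dump0") ))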
00:10:56.562 [2024-09-27 13:15:58.250196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60182 ] 00:10:56.562 [2024-09-27 13:15:58.388610] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.821 [2024-09-27 13:15:58.446370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.821 [2024-09-27 13:15:58.476063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:56.821  Copying: 512/512 [B] (average 500 kBps) 00:10:56.821 00:10:56.821 13:15:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:56.821 ************************************ 00:10:56.821 END TEST dd_flag_noatime 00:10:56.821 ************************************ 00:10:56.821 13:15:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1727442958 )) 00:10:56.821 00:10:56.821 real 0m1.963s 00:10:56.821 user 0m0.516s 00:10:56.821 sys 0m0.405s 00:10:56.821 13:15:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.821 13:15:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:10:57.080 13:15:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:10:57.080 13:15:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:57.080 13:15:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.080 13:15:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:57.080 ************************************ 00:10:57.080 START TEST dd_flags_misc 00:10:57.080 ************************************ 00:10:57.080 13:15:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:10:57.080 13:15:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:10:57.080 13:15:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:10:57.080 13:15:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:10:57.080 13:15:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:57.080 13:15:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:10:57.080 13:15:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:10:57.080 13:15:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:10:57.080 13:15:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:57.080 13:15:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:57.080 [2024-09-27 13:15:58.776346] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:57.080 [2024-09-27 13:15:58.776619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60211 ] 00:10:57.080 [2024-09-27 13:15:58.916669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.339 [2024-09-27 13:15:58.977272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.339 [2024-09-27 13:15:59.004807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:57.339  Copying: 512/512 [B] (average 500 kBps) 00:10:57.339 00:10:57.339 13:15:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lw39opdny0wqqnu6ww9rve173jd6e59eu156d3j0vjhgdt22h0h4mzzxk7t2337s9z4t696chjw44hutqsws658v3rxo5kdymkzax1nfdjssgfwm6i115zehc8tzzpb04fl7ilsfdgi2o998xq1xl7qd1djrcrt8v882501bj360tq6f8maxcbhytvuhe8d9fjrd3z6v2ixtp70zrdev4zap5qpnxt9acr5o95gmpsc3vvad55n5fw8ajg1i1jd7exw3ta56ome664hehtqjuwnnp7yz0l1kccrxcnwh5vvzcroz56xt6fdjrso7to43mb09cvka22vj7h29df8xtbtkgcdmyltgbwc26xwzi196qwhf2vomo8416ag8jic2asmanh3dbaio8e56p2cr8l42k0hf2lwz9nohfw5v93rs17sqc3z7u8ozkpjve2k7ykfg1whjtm5bx5heehff1y0ys9xxl46jxnaog9lon1sllag2721dzkev4nh3y77j == \l\w\3\9\o\p\d\n\y\0\w\q\q\n\u\6\w\w\9\r\v\e\1\7\3\j\d\6\e\5\9\e\u\1\5\6\d\3\j\0\v\j\h\g\d\t\2\2\h\0\h\4\m\z\z\x\k\7\t\2\3\3\7\s\9\z\4\t\6\9\6\c\h\j\w\4\4\h\u\t\q\s\w\s\6\5\8\v\3\r\x\o\5\k\d\y\m\k\z\a\x\1\n\f\d\j\s\s\g\f\w\m\6\i\1\1\5\z\e\h\c\8\t\z\z\p\b\0\4\f\l\7\i\l\s\f\d\g\i\2\o\9\9\8\x\q\1\x\l\7\q\d\1\d\j\r\c\r\t\8\v\8\8\2\5\0\1\b\j\3\6\0\t\q\6\f\8\m\a\x\c\b\h\y\t\v\u\h\e\8\d\9\f\j\r\d\3\z\6\v\2\i\x\t\p\7\0\z\r\d\e\v\4\z\a\p\5\q\p\n\x\t\9\a\c\r\5\o\9\5\g\m\p\s\c\3\v\v\a\d\5\5\n\5\f\w\8\a\j\g\1\i\1\j\d\7\e\x\w\3\t\a\5\6\o\m\e\6\6\4\h\e\h\t\q\j\u\w\n\n\p\7\y\z\0\l\1\k\c\c\r\x\c\n\w\h\5\v\v\z\c\r\o\z\5\6\x\t\6\f\d\j\r\s\o\7\t\o\4\3\m\b\0\9\c\v\k\a\2\2\v\j\7\h\2\9\d\f\8\x\t\b\t\k\g\c\d\m\y\l\t\g\b\w\c\2\6\x\w\z\i\1\9\6\q\w\h\f\2\v\o\m\o\8\4\1\6\a\g\8\j\i\c\2\a\s\m\a\n\h\3\d\b\a\i\o\8\e\5\6\p\2\c\r\8\l\4\2\k\0\h\f\2\l\w\z\9\n\o\h\f\w\5\v\9\3\r\s\1\7\s\q\c\3\z\7\u\8\o\z\k\p\j\v\e\2\k\7\y\k\f\g\1\w\h\j\t\m\5\b\x\5\h\e\e\h\f\f\1\y\0\y\s\9\x\x\l\4\6\j\x\n\a\o\g\9\l\o\n\1\s\l\l\a\g\2\7\2\1\d\z\k\e\v\4\n\h\3\y\7\7\j ]] 00:10:57.339 13:15:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:57.339 13:15:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:57.598 [2024-09-27 13:15:59.228265] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:57.598 [2024-09-27 13:15:59.228381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60220 ] 00:10:57.598 [2024-09-27 13:15:59.361529] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.598 [2024-09-27 13:15:59.418116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.857 [2024-09-27 13:15:59.446879] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:57.857  Copying: 512/512 [B] (average 500 kBps) 00:10:57.857 00:10:57.857 13:15:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lw39opdny0wqqnu6ww9rve173jd6e59eu156d3j0vjhgdt22h0h4mzzxk7t2337s9z4t696chjw44hutqsws658v3rxo5kdymkzax1nfdjssgfwm6i115zehc8tzzpb04fl7ilsfdgi2o998xq1xl7qd1djrcrt8v882501bj360tq6f8maxcbhytvuhe8d9fjrd3z6v2ixtp70zrdev4zap5qpnxt9acr5o95gmpsc3vvad55n5fw8ajg1i1jd7exw3ta56ome664hehtqjuwnnp7yz0l1kccrxcnwh5vvzcroz56xt6fdjrso7to43mb09cvka22vj7h29df8xtbtkgcdmyltgbwc26xwzi196qwhf2vomo8416ag8jic2asmanh3dbaio8e56p2cr8l42k0hf2lwz9nohfw5v93rs17sqc3z7u8ozkpjve2k7ykfg1whjtm5bx5heehff1y0ys9xxl46jxnaog9lon1sllag2721dzkev4nh3y77j == \l\w\3\9\o\p\d\n\y\0\w\q\q\n\u\6\w\w\9\r\v\e\1\7\3\j\d\6\e\5\9\e\u\1\5\6\d\3\j\0\v\j\h\g\d\t\2\2\h\0\h\4\m\z\z\x\k\7\t\2\3\3\7\s\9\z\4\t\6\9\6\c\h\j\w\4\4\h\u\t\q\s\w\s\6\5\8\v\3\r\x\o\5\k\d\y\m\k\z\a\x\1\n\f\d\j\s\s\g\f\w\m\6\i\1\1\5\z\e\h\c\8\t\z\z\p\b\0\4\f\l\7\i\l\s\f\d\g\i\2\o\9\9\8\x\q\1\x\l\7\q\d\1\d\j\r\c\r\t\8\v\8\8\2\5\0\1\b\j\3\6\0\t\q\6\f\8\m\a\x\c\b\h\y\t\v\u\h\e\8\d\9\f\j\r\d\3\z\6\v\2\i\x\t\p\7\0\z\r\d\e\v\4\z\a\p\5\q\p\n\x\t\9\a\c\r\5\o\9\5\g\m\p\s\c\3\v\v\a\d\5\5\n\5\f\w\8\a\j\g\1\i\1\j\d\7\e\x\w\3\t\a\5\6\o\m\e\6\6\4\h\e\h\t\q\j\u\w\n\n\p\7\y\z\0\l\1\k\c\c\r\x\c\n\w\h\5\v\v\z\c\r\o\z\5\6\x\t\6\f\d\j\r\s\o\7\t\o\4\3\m\b\0\9\c\v\k\a\2\2\v\j\7\h\2\9\d\f\8\x\t\b\t\k\g\c\d\m\y\l\t\g\b\w\c\2\6\x\w\z\i\1\9\6\q\w\h\f\2\v\o\m\o\8\4\1\6\a\g\8\j\i\c\2\a\s\m\a\n\h\3\d\b\a\i\o\8\e\5\6\p\2\c\r\8\l\4\2\k\0\h\f\2\l\w\z\9\n\o\h\f\w\5\v\9\3\r\s\1\7\s\q\c\3\z\7\u\8\o\z\k\p\j\v\e\2\k\7\y\k\f\g\1\w\h\j\t\m\5\b\x\5\h\e\e\h\f\f\1\y\0\y\s\9\x\x\l\4\6\j\x\n\a\o\g\9\l\o\n\1\s\l\l\a\g\2\7\2\1\d\z\k\e\v\4\n\h\3\y\7\7\j ]] 00:10:57.857 13:15:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:57.857 13:15:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:57.857 [2024-09-27 13:15:59.664376] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:57.857 [2024-09-27 13:15:59.664484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60229 ] 00:10:58.117 [2024-09-27 13:15:59.803527] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.117 [2024-09-27 13:15:59.860461] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.117 [2024-09-27 13:15:59.887709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:58.375  Copying: 512/512 [B] (average 83 kBps) 00:10:58.375 00:10:58.375 13:16:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lw39opdny0wqqnu6ww9rve173jd6e59eu156d3j0vjhgdt22h0h4mzzxk7t2337s9z4t696chjw44hutqsws658v3rxo5kdymkzax1nfdjssgfwm6i115zehc8tzzpb04fl7ilsfdgi2o998xq1xl7qd1djrcrt8v882501bj360tq6f8maxcbhytvuhe8d9fjrd3z6v2ixtp70zrdev4zap5qpnxt9acr5o95gmpsc3vvad55n5fw8ajg1i1jd7exw3ta56ome664hehtqjuwnnp7yz0l1kccrxcnwh5vvzcroz56xt6fdjrso7to43mb09cvka22vj7h29df8xtbtkgcdmyltgbwc26xwzi196qwhf2vomo8416ag8jic2asmanh3dbaio8e56p2cr8l42k0hf2lwz9nohfw5v93rs17sqc3z7u8ozkpjve2k7ykfg1whjtm5bx5heehff1y0ys9xxl46jxnaog9lon1sllag2721dzkev4nh3y77j == \l\w\3\9\o\p\d\n\y\0\w\q\q\n\u\6\w\w\9\r\v\e\1\7\3\j\d\6\e\5\9\e\u\1\5\6\d\3\j\0\v\j\h\g\d\t\2\2\h\0\h\4\m\z\z\x\k\7\t\2\3\3\7\s\9\z\4\t\6\9\6\c\h\j\w\4\4\h\u\t\q\s\w\s\6\5\8\v\3\r\x\o\5\k\d\y\m\k\z\a\x\1\n\f\d\j\s\s\g\f\w\m\6\i\1\1\5\z\e\h\c\8\t\z\z\p\b\0\4\f\l\7\i\l\s\f\d\g\i\2\o\9\9\8\x\q\1\x\l\7\q\d\1\d\j\r\c\r\t\8\v\8\8\2\5\0\1\b\j\3\6\0\t\q\6\f\8\m\a\x\c\b\h\y\t\v\u\h\e\8\d\9\f\j\r\d\3\z\6\v\2\i\x\t\p\7\0\z\r\d\e\v\4\z\a\p\5\q\p\n\x\t\9\a\c\r\5\o\9\5\g\m\p\s\c\3\v\v\a\d\5\5\n\5\f\w\8\a\j\g\1\i\1\j\d\7\e\x\w\3\t\a\5\6\o\m\e\6\6\4\h\e\h\t\q\j\u\w\n\n\p\7\y\z\0\l\1\k\c\c\r\x\c\n\w\h\5\v\v\z\c\r\o\z\5\6\x\t\6\f\d\j\r\s\o\7\t\o\4\3\m\b\0\9\c\v\k\a\2\2\v\j\7\h\2\9\d\f\8\x\t\b\t\k\g\c\d\m\y\l\t\g\b\w\c\2\6\x\w\z\i\1\9\6\q\w\h\f\2\v\o\m\o\8\4\1\6\a\g\8\j\i\c\2\a\s\m\a\n\h\3\d\b\a\i\o\8\e\5\6\p\2\c\r\8\l\4\2\k\0\h\f\2\l\w\z\9\n\o\h\f\w\5\v\9\3\r\s\1\7\s\q\c\3\z\7\u\8\o\z\k\p\j\v\e\2\k\7\y\k\f\g\1\w\h\j\t\m\5\b\x\5\h\e\e\h\f\f\1\y\0\y\s\9\x\x\l\4\6\j\x\n\a\o\g\9\l\o\n\1\s\l\l\a\g\2\7\2\1\d\z\k\e\v\4\n\h\3\y\7\7\j ]] 00:10:58.375 13:16:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:58.375 13:16:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:58.375 [2024-09-27 13:16:00.125503] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:58.375 [2024-09-27 13:16:00.125813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60239 ] 00:10:58.634 [2024-09-27 13:16:00.263223] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.634 [2024-09-27 13:16:00.318201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.634 [2024-09-27 13:16:00.346035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:58.893  Copying: 512/512 [B] (average 166 kBps) 00:10:58.893 00:10:58.893 13:16:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lw39opdny0wqqnu6ww9rve173jd6e59eu156d3j0vjhgdt22h0h4mzzxk7t2337s9z4t696chjw44hutqsws658v3rxo5kdymkzax1nfdjssgfwm6i115zehc8tzzpb04fl7ilsfdgi2o998xq1xl7qd1djrcrt8v882501bj360tq6f8maxcbhytvuhe8d9fjrd3z6v2ixtp70zrdev4zap5qpnxt9acr5o95gmpsc3vvad55n5fw8ajg1i1jd7exw3ta56ome664hehtqjuwnnp7yz0l1kccrxcnwh5vvzcroz56xt6fdjrso7to43mb09cvka22vj7h29df8xtbtkgcdmyltgbwc26xwzi196qwhf2vomo8416ag8jic2asmanh3dbaio8e56p2cr8l42k0hf2lwz9nohfw5v93rs17sqc3z7u8ozkpjve2k7ykfg1whjtm5bx5heehff1y0ys9xxl46jxnaog9lon1sllag2721dzkev4nh3y77j == \l\w\3\9\o\p\d\n\y\0\w\q\q\n\u\6\w\w\9\r\v\e\1\7\3\j\d\6\e\5\9\e\u\1\5\6\d\3\j\0\v\j\h\g\d\t\2\2\h\0\h\4\m\z\z\x\k\7\t\2\3\3\7\s\9\z\4\t\6\9\6\c\h\j\w\4\4\h\u\t\q\s\w\s\6\5\8\v\3\r\x\o\5\k\d\y\m\k\z\a\x\1\n\f\d\j\s\s\g\f\w\m\6\i\1\1\5\z\e\h\c\8\t\z\z\p\b\0\4\f\l\7\i\l\s\f\d\g\i\2\o\9\9\8\x\q\1\x\l\7\q\d\1\d\j\r\c\r\t\8\v\8\8\2\5\0\1\b\j\3\6\0\t\q\6\f\8\m\a\x\c\b\h\y\t\v\u\h\e\8\d\9\f\j\r\d\3\z\6\v\2\i\x\t\p\7\0\z\r\d\e\v\4\z\a\p\5\q\p\n\x\t\9\a\c\r\5\o\9\5\g\m\p\s\c\3\v\v\a\d\5\5\n\5\f\w\8\a\j\g\1\i\1\j\d\7\e\x\w\3\t\a\5\6\o\m\e\6\6\4\h\e\h\t\q\j\u\w\n\n\p\7\y\z\0\l\1\k\c\c\r\x\c\n\w\h\5\v\v\z\c\r\o\z\5\6\x\t\6\f\d\j\r\s\o\7\t\o\4\3\m\b\0\9\c\v\k\a\2\2\v\j\7\h\2\9\d\f\8\x\t\b\t\k\g\c\d\m\y\l\t\g\b\w\c\2\6\x\w\z\i\1\9\6\q\w\h\f\2\v\o\m\o\8\4\1\6\a\g\8\j\i\c\2\a\s\m\a\n\h\3\d\b\a\i\o\8\e\5\6\p\2\c\r\8\l\4\2\k\0\h\f\2\l\w\z\9\n\o\h\f\w\5\v\9\3\r\s\1\7\s\q\c\3\z\7\u\8\o\z\k\p\j\v\e\2\k\7\y\k\f\g\1\w\h\j\t\m\5\b\x\5\h\e\e\h\f\f\1\y\0\y\s\9\x\x\l\4\6\j\x\n\a\o\g\9\l\o\n\1\s\l\l\a\g\2\7\2\1\d\z\k\e\v\4\n\h\3\y\7\7\j ]] 00:10:58.893 13:16:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:58.893 13:16:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:10:58.893 13:16:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:10:58.893 13:16:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:10:58.893 13:16:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:58.893 13:16:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:58.893 [2024-09-27 13:16:00.608257] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:58.893 [2024-09-27 13:16:00.608501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60243 ] 00:10:59.152 [2024-09-27 13:16:00.750024] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.152 [2024-09-27 13:16:00.819735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.152 [2024-09-27 13:16:00.852811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:59.412  Copying: 512/512 [B] (average 500 kBps) 00:10:59.412 00:10:59.412 13:16:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0fwgymdzxcl74dbxphuttjlddgvsjzh6plzpq6mbolqcj1sa0r5sa7765otucjj033dbt6q5vx38q1hyfrtf8nbqevntz3wgd4g8vzki9q5af6xcoat5h6jci4w1ha5uevrxnyrmm7xs9aq9b8tc5d029hje7vtwgzw2l748d9pa5ak2eujrr7mngdvxrmryc5e5ubjyueuynsaaw1fpdu8ya78ntrngpli6abddg38x29rvdgdi3xl4dojxm8o0olo8bl06emfch5dc1l6u2misbif6oa72lwmf9djgns7hn3j0jy0rg5qmy2mb8j885bon37a4fzzw7tvyljloxcfjakeldyzuli9pnc398f0g9g00trajuw0gra6qifdv0xpt8qnx4ied7yl9wp17ulbxjcfof743b4t1ng0tnskgjs5zsfp3gw73leduphw49jnhmv468xrzj63c8cz5wb5xmghp6xvuniv83aqnl0dh23jp6jvbbii6t3b3vixj == \0\f\w\g\y\m\d\z\x\c\l\7\4\d\b\x\p\h\u\t\t\j\l\d\d\g\v\s\j\z\h\6\p\l\z\p\q\6\m\b\o\l\q\c\j\1\s\a\0\r\5\s\a\7\7\6\5\o\t\u\c\j\j\0\3\3\d\b\t\6\q\5\v\x\3\8\q\1\h\y\f\r\t\f\8\n\b\q\e\v\n\t\z\3\w\g\d\4\g\8\v\z\k\i\9\q\5\a\f\6\x\c\o\a\t\5\h\6\j\c\i\4\w\1\h\a\5\u\e\v\r\x\n\y\r\m\m\7\x\s\9\a\q\9\b\8\t\c\5\d\0\2\9\h\j\e\7\v\t\w\g\z\w\2\l\7\4\8\d\9\p\a\5\a\k\2\e\u\j\r\r\7\m\n\g\d\v\x\r\m\r\y\c\5\e\5\u\b\j\y\u\e\u\y\n\s\a\a\w\1\f\p\d\u\8\y\a\7\8\n\t\r\n\g\p\l\i\6\a\b\d\d\g\3\8\x\2\9\r\v\d\g\d\i\3\x\l\4\d\o\j\x\m\8\o\0\o\l\o\8\b\l\0\6\e\m\f\c\h\5\d\c\1\l\6\u\2\m\i\s\b\i\f\6\o\a\7\2\l\w\m\f\9\d\j\g\n\s\7\h\n\3\j\0\j\y\0\r\g\5\q\m\y\2\m\b\8\j\8\8\5\b\o\n\3\7\a\4\f\z\z\w\7\t\v\y\l\j\l\o\x\c\f\j\a\k\e\l\d\y\z\u\l\i\9\p\n\c\3\9\8\f\0\g\9\g\0\0\t\r\a\j\u\w\0\g\r\a\6\q\i\f\d\v\0\x\p\t\8\q\n\x\4\i\e\d\7\y\l\9\w\p\1\7\u\l\b\x\j\c\f\o\f\7\4\3\b\4\t\1\n\g\0\t\n\s\k\g\j\s\5\z\s\f\p\3\g\w\7\3\l\e\d\u\p\h\w\4\9\j\n\h\m\v\4\6\8\x\r\z\j\6\3\c\8\c\z\5\w\b\5\x\m\g\h\p\6\x\v\u\n\i\v\8\3\a\q\n\l\0\d\h\2\3\j\p\6\j\v\b\b\i\i\6\t\3\b\3\v\i\x\j ]] 00:10:59.412 13:16:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:59.412 13:16:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:59.412 [2024-09-27 13:16:01.080890] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:59.412 [2024-09-27 13:16:01.080985] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60257 ] 00:10:59.412 [2024-09-27 13:16:01.215992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.670 [2024-09-27 13:16:01.276368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.670 [2024-09-27 13:16:01.307030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:59.670  Copying: 512/512 [B] (average 500 kBps) 00:10:59.670 00:10:59.670 13:16:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0fwgymdzxcl74dbxphuttjlddgvsjzh6plzpq6mbolqcj1sa0r5sa7765otucjj033dbt6q5vx38q1hyfrtf8nbqevntz3wgd4g8vzki9q5af6xcoat5h6jci4w1ha5uevrxnyrmm7xs9aq9b8tc5d029hje7vtwgzw2l748d9pa5ak2eujrr7mngdvxrmryc5e5ubjyueuynsaaw1fpdu8ya78ntrngpli6abddg38x29rvdgdi3xl4dojxm8o0olo8bl06emfch5dc1l6u2misbif6oa72lwmf9djgns7hn3j0jy0rg5qmy2mb8j885bon37a4fzzw7tvyljloxcfjakeldyzuli9pnc398f0g9g00trajuw0gra6qifdv0xpt8qnx4ied7yl9wp17ulbxjcfof743b4t1ng0tnskgjs5zsfp3gw73leduphw49jnhmv468xrzj63c8cz5wb5xmghp6xvuniv83aqnl0dh23jp6jvbbii6t3b3vixj == \0\f\w\g\y\m\d\z\x\c\l\7\4\d\b\x\p\h\u\t\t\j\l\d\d\g\v\s\j\z\h\6\p\l\z\p\q\6\m\b\o\l\q\c\j\1\s\a\0\r\5\s\a\7\7\6\5\o\t\u\c\j\j\0\3\3\d\b\t\6\q\5\v\x\3\8\q\1\h\y\f\r\t\f\8\n\b\q\e\v\n\t\z\3\w\g\d\4\g\8\v\z\k\i\9\q\5\a\f\6\x\c\o\a\t\5\h\6\j\c\i\4\w\1\h\a\5\u\e\v\r\x\n\y\r\m\m\7\x\s\9\a\q\9\b\8\t\c\5\d\0\2\9\h\j\e\7\v\t\w\g\z\w\2\l\7\4\8\d\9\p\a\5\a\k\2\e\u\j\r\r\7\m\n\g\d\v\x\r\m\r\y\c\5\e\5\u\b\j\y\u\e\u\y\n\s\a\a\w\1\f\p\d\u\8\y\a\7\8\n\t\r\n\g\p\l\i\6\a\b\d\d\g\3\8\x\2\9\r\v\d\g\d\i\3\x\l\4\d\o\j\x\m\8\o\0\o\l\o\8\b\l\0\6\e\m\f\c\h\5\d\c\1\l\6\u\2\m\i\s\b\i\f\6\o\a\7\2\l\w\m\f\9\d\j\g\n\s\7\h\n\3\j\0\j\y\0\r\g\5\q\m\y\2\m\b\8\j\8\8\5\b\o\n\3\7\a\4\f\z\z\w\7\t\v\y\l\j\l\o\x\c\f\j\a\k\e\l\d\y\z\u\l\i\9\p\n\c\3\9\8\f\0\g\9\g\0\0\t\r\a\j\u\w\0\g\r\a\6\q\i\f\d\v\0\x\p\t\8\q\n\x\4\i\e\d\7\y\l\9\w\p\1\7\u\l\b\x\j\c\f\o\f\7\4\3\b\4\t\1\n\g\0\t\n\s\k\g\j\s\5\z\s\f\p\3\g\w\7\3\l\e\d\u\p\h\w\4\9\j\n\h\m\v\4\6\8\x\r\z\j\6\3\c\8\c\z\5\w\b\5\x\m\g\h\p\6\x\v\u\n\i\v\8\3\a\q\n\l\0\d\h\2\3\j\p\6\j\v\b\b\i\i\6\t\3\b\3\v\i\x\j ]] 00:10:59.670 13:16:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:59.670 13:16:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:59.929 [2024-09-27 13:16:01.527228] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:10:59.929 [2024-09-27 13:16:01.527478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60264 ] 00:10:59.929 [2024-09-27 13:16:01.664534] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.929 [2024-09-27 13:16:01.726528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.929 [2024-09-27 13:16:01.755359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:00.188  Copying: 512/512 [B] (average 250 kBps) 00:11:00.188 00:11:00.188 13:16:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0fwgymdzxcl74dbxphuttjlddgvsjzh6plzpq6mbolqcj1sa0r5sa7765otucjj033dbt6q5vx38q1hyfrtf8nbqevntz3wgd4g8vzki9q5af6xcoat5h6jci4w1ha5uevrxnyrmm7xs9aq9b8tc5d029hje7vtwgzw2l748d9pa5ak2eujrr7mngdvxrmryc5e5ubjyueuynsaaw1fpdu8ya78ntrngpli6abddg38x29rvdgdi3xl4dojxm8o0olo8bl06emfch5dc1l6u2misbif6oa72lwmf9djgns7hn3j0jy0rg5qmy2mb8j885bon37a4fzzw7tvyljloxcfjakeldyzuli9pnc398f0g9g00trajuw0gra6qifdv0xpt8qnx4ied7yl9wp17ulbxjcfof743b4t1ng0tnskgjs5zsfp3gw73leduphw49jnhmv468xrzj63c8cz5wb5xmghp6xvuniv83aqnl0dh23jp6jvbbii6t3b3vixj == \0\f\w\g\y\m\d\z\x\c\l\7\4\d\b\x\p\h\u\t\t\j\l\d\d\g\v\s\j\z\h\6\p\l\z\p\q\6\m\b\o\l\q\c\j\1\s\a\0\r\5\s\a\7\7\6\5\o\t\u\c\j\j\0\3\3\d\b\t\6\q\5\v\x\3\8\q\1\h\y\f\r\t\f\8\n\b\q\e\v\n\t\z\3\w\g\d\4\g\8\v\z\k\i\9\q\5\a\f\6\x\c\o\a\t\5\h\6\j\c\i\4\w\1\h\a\5\u\e\v\r\x\n\y\r\m\m\7\x\s\9\a\q\9\b\8\t\c\5\d\0\2\9\h\j\e\7\v\t\w\g\z\w\2\l\7\4\8\d\9\p\a\5\a\k\2\e\u\j\r\r\7\m\n\g\d\v\x\r\m\r\y\c\5\e\5\u\b\j\y\u\e\u\y\n\s\a\a\w\1\f\p\d\u\8\y\a\7\8\n\t\r\n\g\p\l\i\6\a\b\d\d\g\3\8\x\2\9\r\v\d\g\d\i\3\x\l\4\d\o\j\x\m\8\o\0\o\l\o\8\b\l\0\6\e\m\f\c\h\5\d\c\1\l\6\u\2\m\i\s\b\i\f\6\o\a\7\2\l\w\m\f\9\d\j\g\n\s\7\h\n\3\j\0\j\y\0\r\g\5\q\m\y\2\m\b\8\j\8\8\5\b\o\n\3\7\a\4\f\z\z\w\7\t\v\y\l\j\l\o\x\c\f\j\a\k\e\l\d\y\z\u\l\i\9\p\n\c\3\9\8\f\0\g\9\g\0\0\t\r\a\j\u\w\0\g\r\a\6\q\i\f\d\v\0\x\p\t\8\q\n\x\4\i\e\d\7\y\l\9\w\p\1\7\u\l\b\x\j\c\f\o\f\7\4\3\b\4\t\1\n\g\0\t\n\s\k\g\j\s\5\z\s\f\p\3\g\w\7\3\l\e\d\u\p\h\w\4\9\j\n\h\m\v\4\6\8\x\r\z\j\6\3\c\8\c\z\5\w\b\5\x\m\g\h\p\6\x\v\u\n\i\v\8\3\a\q\n\l\0\d\h\2\3\j\p\6\j\v\b\b\i\i\6\t\3\b\3\v\i\x\j ]] 00:11:00.188 13:16:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:00.188 13:16:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:00.188 [2024-09-27 13:16:01.976936] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:00.188 [2024-09-27 13:16:01.977031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60274 ] 00:11:00.447 [2024-09-27 13:16:02.116850] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.447 [2024-09-27 13:16:02.170858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.447 [2024-09-27 13:16:02.199465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:00.705  Copying: 512/512 [B] (average 166 kBps) 00:11:00.705 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0fwgymdzxcl74dbxphuttjlddgvsjzh6plzpq6mbolqcj1sa0r5sa7765otucjj033dbt6q5vx38q1hyfrtf8nbqevntz3wgd4g8vzki9q5af6xcoat5h6jci4w1ha5uevrxnyrmm7xs9aq9b8tc5d029hje7vtwgzw2l748d9pa5ak2eujrr7mngdvxrmryc5e5ubjyueuynsaaw1fpdu8ya78ntrngpli6abddg38x29rvdgdi3xl4dojxm8o0olo8bl06emfch5dc1l6u2misbif6oa72lwmf9djgns7hn3j0jy0rg5qmy2mb8j885bon37a4fzzw7tvyljloxcfjakeldyzuli9pnc398f0g9g00trajuw0gra6qifdv0xpt8qnx4ied7yl9wp17ulbxjcfof743b4t1ng0tnskgjs5zsfp3gw73leduphw49jnhmv468xrzj63c8cz5wb5xmghp6xvuniv83aqnl0dh23jp6jvbbii6t3b3vixj == \0\f\w\g\y\m\d\z\x\c\l\7\4\d\b\x\p\h\u\t\t\j\l\d\d\g\v\s\j\z\h\6\p\l\z\p\q\6\m\b\o\l\q\c\j\1\s\a\0\r\5\s\a\7\7\6\5\o\t\u\c\j\j\0\3\3\d\b\t\6\q\5\v\x\3\8\q\1\h\y\f\r\t\f\8\n\b\q\e\v\n\t\z\3\w\g\d\4\g\8\v\z\k\i\9\q\5\a\f\6\x\c\o\a\t\5\h\6\j\c\i\4\w\1\h\a\5\u\e\v\r\x\n\y\r\m\m\7\x\s\9\a\q\9\b\8\t\c\5\d\0\2\9\h\j\e\7\v\t\w\g\z\w\2\l\7\4\8\d\9\p\a\5\a\k\2\e\u\j\r\r\7\m\n\g\d\v\x\r\m\r\y\c\5\e\5\u\b\j\y\u\e\u\y\n\s\a\a\w\1\f\p\d\u\8\y\a\7\8\n\t\r\n\g\p\l\i\6\a\b\d\d\g\3\8\x\2\9\r\v\d\g\d\i\3\x\l\4\d\o\j\x\m\8\o\0\o\l\o\8\b\l\0\6\e\m\f\c\h\5\d\c\1\l\6\u\2\m\i\s\b\i\f\6\o\a\7\2\l\w\m\f\9\d\j\g\n\s\7\h\n\3\j\0\j\y\0\r\g\5\q\m\y\2\m\b\8\j\8\8\5\b\o\n\3\7\a\4\f\z\z\w\7\t\v\y\l\j\l\o\x\c\f\j\a\k\e\l\d\y\z\u\l\i\9\p\n\c\3\9\8\f\0\g\9\g\0\0\t\r\a\j\u\w\0\g\r\a\6\q\i\f\d\v\0\x\p\t\8\q\n\x\4\i\e\d\7\y\l\9\w\p\1\7\u\l\b\x\j\c\f\o\f\7\4\3\b\4\t\1\n\g\0\t\n\s\k\g\j\s\5\z\s\f\p\3\g\w\7\3\l\e\d\u\p\h\w\4\9\j\n\h\m\v\4\6\8\x\r\z\j\6\3\c\8\c\z\5\w\b\5\x\m\g\h\p\6\x\v\u\n\i\v\8\3\a\q\n\l\0\d\h\2\3\j\p\6\j\v\b\b\i\i\6\t\3\b\3\v\i\x\j ]] 00:11:00.705 00:11:00.705 real 0m3.662s 00:11:00.705 user 0m2.009s 00:11:00.705 sys 0m1.419s 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:00.705 ************************************ 00:11:00.705 END TEST dd_flags_misc 00:11:00.705 ************************************ 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:11:00.705 * Second test run, disabling liburing, forcing AIO 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
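The dd_flags_misc pass that just finished walks a small flag matrix: read flags direct and nonblock, write flags direct, nonblock, sync and dsync, one 512-byte copy per pair, each followed by the content check described above. Reconstructed in outline from the xtrace, with paths shortened to dd.dump0/dd.dump1 and gen_bytes taken to be the suite's own data generator (its body is xtrace-disabled in the log):

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)   # write side reuses the read flags plus sync/dsync
  for flag_ro in "${flags_ro[@]}"; do
    gen_bytes 512                          # suite helper; by all appearances it refreshes the 512-byte source payload
    for flag_rw in "${flags_rw[@]}"; do
      spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
      # followed by the [[ <dump0 bytes> == <dump1 bytes> ]] comparison seen in the log
    done
  done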
00:11:00.705 ************************************ 00:11:00.705 START TEST dd_flag_append_forced_aio 00:11:00.705 ************************************ 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=tnjxbg2lqoksgslludusb4asgtqorlzo 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=1i37hcqaos5z7pttqk46rdvs6mnepxe2 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s tnjxbg2lqoksgslludusb4asgtqorlzo 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 1i37hcqaos5z7pttqk46rdvs6mnepxe2 00:11:00.705 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:11:00.705 [2024-09-27 13:16:02.485076] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
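The append case starting here is the simplest of the forced-AIO set: two 32-byte strings are generated (dump0=tnjxbg2…, dump1=1i37hcq…), written into the two dump files (the redirections themselves are not visible in the trace), and then dump0's file is copied onto dump1's file with --oflag=append; the check further down accepts only the concatenation dump1 followed by dump0. In outline, under the same shortened paths as above:

  dump0=$(gen_bytes 32)                        # tnjxbg2lqoksgslludusb4asgtqorlzo in this run
  dump1=$(gen_bytes 32)                        # 1i37hcqaos5z7pttqk46rdvs6mnepxe2
  printf %s "$dump0" > dd.dump0                # assumed targets of the two printf calls in the trace
  printf %s "$dump1" > dd.dump1
  spdk_dd --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
  [[ $(<dd.dump1) == "${dump1}${dump0}" ]]     # append must leave dump1's original bytes first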
00:11:00.705 [2024-09-27 13:16:02.485166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60302 ] 00:11:00.963 [2024-09-27 13:16:02.624309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.963 [2024-09-27 13:16:02.698014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.963 [2024-09-27 13:16:02.733350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:01.222  Copying: 32/32 [B] (average 31 kBps) 00:11:01.222 00:11:01.222 ************************************ 00:11:01.222 END TEST dd_flag_append_forced_aio 00:11:01.222 ************************************ 00:11:01.222 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 1i37hcqaos5z7pttqk46rdvs6mnepxe2tnjxbg2lqoksgslludusb4asgtqorlzo == \1\i\3\7\h\c\q\a\o\s\5\z\7\p\t\t\q\k\4\6\r\d\v\s\6\m\n\e\p\x\e\2\t\n\j\x\b\g\2\l\q\o\k\s\g\s\l\l\u\d\u\s\b\4\a\s\g\t\q\o\r\l\z\o ]] 00:11:01.222 00:11:01.222 real 0m0.530s 00:11:01.222 user 0m0.285s 00:11:01.222 sys 0m0.121s 00:11:01.222 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.222 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:01.222 13:16:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:11:01.222 13:16:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:01.222 13:16:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.222 13:16:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:01.222 ************************************ 00:11:01.222 START TEST dd_flag_directory_forced_aio 00:11:01.222 ************************************ 00:11:01.222 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:11:01.222 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:01.222 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:11:01.222 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:01.222 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.222 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.222 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.222 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.222 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.223 13:16:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.223 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.223 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:01.223 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:01.223 [2024-09-27 13:16:03.065973] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:01.223 [2024-09-27 13:16:03.066120] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60329 ] 00:11:01.481 [2024-09-27 13:16:03.205723] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.481 [2024-09-27 13:16:03.275678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.481 [2024-09-27 13:16:03.309102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:01.740 [2024-09-27 13:16:03.330559] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:01.740 [2024-09-27 13:16:03.330636] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:01.740 [2024-09-27 13:16:03.330660] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:01.740 [2024-09-27 13:16:03.401909] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:01.740 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:11:01.740 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:01.740 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:11:01.740 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:11:01.740 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:11:01.740 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:01.740 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:01.740 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:11:01.740 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 
00:11:01.740 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.740 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.740 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.740 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.740 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.740 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.740 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.740 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:01.740 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:01.740 [2024-09-27 13:16:03.554460] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:01.740 [2024-09-27 13:16:03.554562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60338 ] 00:11:01.999 [2024-09-27 13:16:03.692572] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.999 [2024-09-27 13:16:03.754980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.999 [2024-09-27 13:16:03.783842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:01.999 [2024-09-27 13:16:03.801729] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:01.999 [2024-09-27 13:16:03.801797] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:01.999 [2024-09-27 13:16:03.801826] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:02.258 [2024-09-27 13:16:03.863547] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:02.258 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:11:02.258 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:02.258 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:11:02.258 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:11:02.258 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:11:02.258 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
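Both halves of the directory test above are expected-failure runs: opening a regular file with --iflag=directory (dd/posix.sh@31) or --oflag=directory (@32) makes spdk_dd fail with "Not a directory", and the NOT wrapper turns that non-zero exit into a pass (hence the es=236 → 108 → 1 bookkeeping in the trace). Stripped of the wrapper, each half amounts to something like the following sketch, which uses a plain if in place of NOT():

  if spdk_dd --aio --if=dd.dump0 --iflag=directory --of=dd.dump0; then
    echo 'directory iflag unexpectedly succeeded on a regular file' >&2
    exit 1                      # NOT() inverts the status instead of exiting; shown as an if for clarity
  fi
  # and symmetrically for the write side: spdk_dd --aio --if=dd.dump0 --of=dd.dump0 --oflag=directory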
00:11:02.258 00:11:02.258 real 0m0.949s 00:11:02.258 user 0m0.526s 00:11:02.258 sys 0m0.214s 00:11:02.258 ************************************ 00:11:02.258 END TEST dd_flag_directory_forced_aio 00:11:02.258 ************************************ 00:11:02.258 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:02.258 13:16:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:02.258 13:16:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:11:02.258 13:16:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:02.258 13:16:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.258 13:16:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:02.258 ************************************ 00:11:02.258 START TEST dd_flag_nofollow_forced_aio 00:11:02.258 ************************************ 00:11:02.258 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:11:02.258 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:02.258 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:02.258 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:02.258 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:02.258 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:02.258 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:11:02.258 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:02.258 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.258 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.258 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.258 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.258 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.258 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.258 13:16:04 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.258 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:02.258 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:02.258 [2024-09-27 13:16:04.071671] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:02.258 [2024-09-27 13:16:04.071800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60367 ] 00:11:02.521 [2024-09-27 13:16:04.207278] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.521 [2024-09-27 13:16:04.265890] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.521 [2024-09-27 13:16:04.295884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:02.521 [2024-09-27 13:16:04.314895] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:02.521 [2024-09-27 13:16:04.314956] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:02.521 [2024-09-27 13:16:04.314978] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:02.780 [2024-09-27 13:16:04.379090] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:02.780 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:11:02.780 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:02.780 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:11:02.780 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:11:02.780 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:11:02.780 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:02.780 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:02.780 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:11:02.780 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:02.780 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.780 13:16:04 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.780 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.780 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.780 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.780 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.780 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.780 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:02.780 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:02.780 [2024-09-27 13:16:04.528940] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:02.780 [2024-09-27 13:16:04.529051] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60376 ] 00:11:03.039 [2024-09-27 13:16:04.668291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.039 [2024-09-27 13:16:04.729151] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.039 [2024-09-27 13:16:04.761981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:03.039 [2024-09-27 13:16:04.780758] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:03.039 [2024-09-27 13:16:04.780824] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:03.039 [2024-09-27 13:16:04.780837] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:03.039 [2024-09-27 13:16:04.841587] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:03.299 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:11:03.299 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:03.299 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:11:03.299 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:11:03.299 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:11:03.299 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:03.299 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:11:03.299 13:16:04 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:03.299 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:03.300 13:16:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:03.300 [2024-09-27 13:16:04.983224] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:03.300 [2024-09-27 13:16:04.983326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60384 ] 00:11:03.300 [2024-09-27 13:16:05.121352] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.559 [2024-09-27 13:16:05.176722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.559 [2024-09-27 13:16:05.205271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:03.559  Copying: 512/512 [B] (average 500 kBps) 00:11:03.559 00:11:03.559 13:16:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ f45upfbt2jp7oy969fftbxyefer89zsum2xyijnchbiaou3f5sht0y4bktcx7gwsgrr971quvuabrjrt3t25epzulxcva8xbzlqhepul7bweb3h9ot9fhm03a9874gfu2wkazd3c48md8dvgj93teal2ecyvnsbtt85ygw11sokf9yx178hmdeaqfj59yicihtcigg6t68arf356aboym60675sw8fqboi0jtc86x7fstkv9p3ncfr6n0855mbh0oo0u33dp284g30tmqdg4cw2xkcx44f9ihx12ctnowpfbhfuzy3w3qt2sehiw55zxbg0jm5ifezwntqgmsgguux1td2ygompr118k2rxxh7tov0u61k4y8isn25vsbk5jsxjuer7bue3sc1qqv9g75ww8dq90wtuitwyt9uar24of3h5r9yqii7qin9f997rqcf9cfm6a054f6sdf0cvyd10m2ygr9tr8jf35em0uki7pk0uh76z2pzr98o3c1l4x == \f\4\5\u\p\f\b\t\2\j\p\7\o\y\9\6\9\f\f\t\b\x\y\e\f\e\r\8\9\z\s\u\m\2\x\y\i\j\n\c\h\b\i\a\o\u\3\f\5\s\h\t\0\y\4\b\k\t\c\x\7\g\w\s\g\r\r\9\7\1\q\u\v\u\a\b\r\j\r\t\3\t\2\5\e\p\z\u\l\x\c\v\a\8\x\b\z\l\q\h\e\p\u\l\7\b\w\e\b\3\h\9\o\t\9\f\h\m\0\3\a\9\8\7\4\g\f\u\2\w\k\a\z\d\3\c\4\8\m\d\8\d\v\g\j\9\3\t\e\a\l\2\e\c\y\v\n\s\b\t\t\8\5\y\g\w\1\1\s\o\k\f\9\y\x\1\7\8\h\m\d\e\a\q\f\j\5\9\y\i\c\i\h\t\c\i\g\g\6\t\6\8\a\r\f\3\5\6\a\b\o\y\m\6\0\6\7\5\s\w\8\f\q\b\o\i\0\j\t\c\8\6\x\7\f\s\t\k\v\9\p\3\n\c\f\r\6\n\0\8\5\5\m\b\h\0\o\o\0\u\3\3\d\p\2\8\4\g\3\0\t\m\q\d\g\4\c\w\2\x\k\c\x\4\4\f\9\i\h\x\1\2\c\t\n\o\w\p\f\b\h\f\u\z\y\3\w\3\q\t\2\s\e\h\i\w\5\5\z\x\b\g\0\j\m\5\i\f\e\z\w\n\t\q\g\m\s\g\g\u\u\x\1\t\d\2\y\g\o\m\p\r\1\1\8\k\2\r\x\x\h\7\t\o\v\0\u\6\1\k\4\y\8\i\s\n\2\5\v\s\b\k\5\j\s\x\j\u\e\r\7\b\u\e\3\s\c\1\q\q\v\9\g\7\5\w\w\8\d\q\9\0\w\t\u\i\t\w\y\t\9\u\a\r\2\4\o\f\3\h\5\r\9\y\q\i\i\7\q\i\n\9\f\9\9\7\r\q\c\f\9\c\f\m\6\a\0\5\4\f\6\s\d\f\0\c\v\y\d\1\0\m\2\y\g\r\9\t\r\8\j\f\3\5\e\m\0\u\k\i\7\p\k\0\u\h\7\6\z\2\p\z\r\9\8\o\3\c\1\l\4\x ]] 00:11:03.559 00:11:03.559 real 0m1.384s 00:11:03.559 user 0m0.746s 00:11:03.559 sys 0m0.308s 00:11:03.559 ************************************ 00:11:03.559 END TEST dd_flag_nofollow_forced_aio 00:11:03.559 ************************************ 00:11:03.559 13:16:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.559 13:16:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:03.817 13:16:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:11:03.817 
13:16:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:03.817 13:16:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.817 13:16:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:03.817 ************************************ 00:11:03.817 START TEST dd_flag_noatime_forced_aio 00:11:03.817 ************************************ 00:11:03.817 13:16:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:11:03.817 13:16:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:11:03.817 13:16:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:11:03.817 13:16:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:11:03.817 13:16:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:03.817 13:16:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:03.817 13:16:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:03.817 13:16:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1727442965 00:11:03.817 13:16:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:03.817 13:16:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1727442965 00:11:03.817 13:16:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:11:04.752 13:16:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:04.752 [2024-09-27 13:16:06.513578] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:04.753 [2024-09-27 13:16:06.513697] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60424 ] 00:11:05.011 [2024-09-27 13:16:06.653923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.011 [2024-09-27 13:16:06.722954] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.011 [2024-09-27 13:16:06.756513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:05.271  Copying: 512/512 [B] (average 500 kBps) 00:11:05.271 00:11:05.271 13:16:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:05.271 13:16:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1727442965 )) 00:11:05.271 13:16:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:05.271 13:16:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1727442965 )) 00:11:05.271 13:16:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:05.271 [2024-09-27 13:16:07.039908] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:05.271 [2024-09-27 13:16:07.040039] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60436 ] 00:11:05.530 [2024-09-27 13:16:07.177227] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.530 [2024-09-27 13:16:07.235980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.530 [2024-09-27 13:16:07.264811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:05.789  Copying: 512/512 [B] (average 500 kBps) 00:11:05.789 00:11:05.789 13:16:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:05.789 13:16:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1727442967 )) 00:11:05.789 00:11:05.789 real 0m2.045s 00:11:05.789 user 0m0.577s 00:11:05.789 sys 0m0.224s 00:11:05.789 ************************************ 00:11:05.789 END TEST dd_flag_noatime_forced_aio 00:11:05.789 ************************************ 00:11:05.789 13:16:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.789 13:16:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:05.789 13:16:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:11:05.789 13:16:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:05.789 13:16:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.789 13:16:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:05.789 
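The noatime test that just passed reduces to stat calls around two copies: record the source file's access time, copy it once with --iflag=noatime (the atime must stay at 1727442965) and once without (the atime must move forward, hence the final "< 1727442967" check). In outline, with the same shortened paths and the dump1-side checks omitted:

  atime_if=$(stat --printf=%X dd.dump0)            # 1727442965 in this run
  sleep 1
  spdk_dd --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
  (( atime_if == $(stat --printf=%X dd.dump0) ))   # noatime read: access time unchanged
  spdk_dd --aio --if=dd.dump0 --of=dd.dump1
  (( atime_if < $(stat --printf=%X dd.dump0) ))    # ordinary read: access time advanced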
************************************ 00:11:05.789 START TEST dd_flags_misc_forced_aio 00:11:05.789 ************************************ 00:11:05.789 13:16:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:11:05.789 13:16:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:11:05.789 13:16:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:11:05.789 13:16:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:11:05.789 13:16:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:05.789 13:16:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:11:05.789 13:16:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:05.789 13:16:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:05.789 13:16:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:05.789 13:16:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:05.789 [2024-09-27 13:16:07.583168] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:05.789 [2024-09-27 13:16:07.583258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60462 ] 00:11:06.048 [2024-09-27 13:16:07.713170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.048 [2024-09-27 13:16:07.772079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.048 [2024-09-27 13:16:07.799899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:06.351  Copying: 512/512 [B] (average 500 kBps) 00:11:06.351 00:11:06.351 13:16:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iefxtt387bn76qh7b1f8wr666157hwnpkst0v1yy61gkh9cb98y5zo116fn16x8xxisqbjxynqjemfl44pvdeo7quxun1qe4bexscujz63d26fkm935lftcs5qbqs3y29x9zmtq4lxbi4yvcpl777mktrm2ax1sceo8oo9evud7n0wgy2lowcktzci395zlj32potzj406vxcrztqix9k44pkyxw55fbi1t6rt2lbsc09qpdi4f2y4rutombjpcb9snguviqzgz1h5es7gw34yiwmxq7dmnxfae7ka5i25nlfspns30cykry1bx6qr37kyd94m9w5zuuxkdd9krt60ivjd6dfsyfmur65rpn3jy8ytt9r2rqw3eqmkkpglvqlaa90l0nxj4vtxfff8u9hcxvt4vj4ks7obkxcox3pqnf00tjzceme9jr7hftxf7gh366kiulx6pmjpxucq2r6lyvm7c3p6fhpaxktfj7o31kd4oxl62fvbunk1k890g3 == 
\i\e\f\x\t\t\3\8\7\b\n\7\6\q\h\7\b\1\f\8\w\r\6\6\6\1\5\7\h\w\n\p\k\s\t\0\v\1\y\y\6\1\g\k\h\9\c\b\9\8\y\5\z\o\1\1\6\f\n\1\6\x\8\x\x\i\s\q\b\j\x\y\n\q\j\e\m\f\l\4\4\p\v\d\e\o\7\q\u\x\u\n\1\q\e\4\b\e\x\s\c\u\j\z\6\3\d\2\6\f\k\m\9\3\5\l\f\t\c\s\5\q\b\q\s\3\y\2\9\x\9\z\m\t\q\4\l\x\b\i\4\y\v\c\p\l\7\7\7\m\k\t\r\m\2\a\x\1\s\c\e\o\8\o\o\9\e\v\u\d\7\n\0\w\g\y\2\l\o\w\c\k\t\z\c\i\3\9\5\z\l\j\3\2\p\o\t\z\j\4\0\6\v\x\c\r\z\t\q\i\x\9\k\4\4\p\k\y\x\w\5\5\f\b\i\1\t\6\r\t\2\l\b\s\c\0\9\q\p\d\i\4\f\2\y\4\r\u\t\o\m\b\j\p\c\b\9\s\n\g\u\v\i\q\z\g\z\1\h\5\e\s\7\g\w\3\4\y\i\w\m\x\q\7\d\m\n\x\f\a\e\7\k\a\5\i\2\5\n\l\f\s\p\n\s\3\0\c\y\k\r\y\1\b\x\6\q\r\3\7\k\y\d\9\4\m\9\w\5\z\u\u\x\k\d\d\9\k\r\t\6\0\i\v\j\d\6\d\f\s\y\f\m\u\r\6\5\r\p\n\3\j\y\8\y\t\t\9\r\2\r\q\w\3\e\q\m\k\k\p\g\l\v\q\l\a\a\9\0\l\0\n\x\j\4\v\t\x\f\f\f\8\u\9\h\c\x\v\t\4\v\j\4\k\s\7\o\b\k\x\c\o\x\3\p\q\n\f\0\0\t\j\z\c\e\m\e\9\j\r\7\h\f\t\x\f\7\g\h\3\6\6\k\i\u\l\x\6\p\m\j\p\x\u\c\q\2\r\6\l\y\v\m\7\c\3\p\6\f\h\p\a\x\k\t\f\j\7\o\3\1\k\d\4\o\x\l\6\2\f\v\b\u\n\k\1\k\8\9\0\g\3 ]] 00:11:06.351 13:16:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:06.351 13:16:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:06.351 [2024-09-27 13:16:08.070300] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:06.351 [2024-09-27 13:16:08.070421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60470 ] 00:11:06.610 [2024-09-27 13:16:08.208980] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.610 [2024-09-27 13:16:08.279703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.610 [2024-09-27 13:16:08.313671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:06.869  Copying: 512/512 [B] (average 500 kBps) 00:11:06.869 00:11:06.869 13:16:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iefxtt387bn76qh7b1f8wr666157hwnpkst0v1yy61gkh9cb98y5zo116fn16x8xxisqbjxynqjemfl44pvdeo7quxun1qe4bexscujz63d26fkm935lftcs5qbqs3y29x9zmtq4lxbi4yvcpl777mktrm2ax1sceo8oo9evud7n0wgy2lowcktzci395zlj32potzj406vxcrztqix9k44pkyxw55fbi1t6rt2lbsc09qpdi4f2y4rutombjpcb9snguviqzgz1h5es7gw34yiwmxq7dmnxfae7ka5i25nlfspns30cykry1bx6qr37kyd94m9w5zuuxkdd9krt60ivjd6dfsyfmur65rpn3jy8ytt9r2rqw3eqmkkpglvqlaa90l0nxj4vtxfff8u9hcxvt4vj4ks7obkxcox3pqnf00tjzceme9jr7hftxf7gh366kiulx6pmjpxucq2r6lyvm7c3p6fhpaxktfj7o31kd4oxl62fvbunk1k890g3 == 
\i\e\f\x\t\t\3\8\7\b\n\7\6\q\h\7\b\1\f\8\w\r\6\6\6\1\5\7\h\w\n\p\k\s\t\0\v\1\y\y\6\1\g\k\h\9\c\b\9\8\y\5\z\o\1\1\6\f\n\1\6\x\8\x\x\i\s\q\b\j\x\y\n\q\j\e\m\f\l\4\4\p\v\d\e\o\7\q\u\x\u\n\1\q\e\4\b\e\x\s\c\u\j\z\6\3\d\2\6\f\k\m\9\3\5\l\f\t\c\s\5\q\b\q\s\3\y\2\9\x\9\z\m\t\q\4\l\x\b\i\4\y\v\c\p\l\7\7\7\m\k\t\r\m\2\a\x\1\s\c\e\o\8\o\o\9\e\v\u\d\7\n\0\w\g\y\2\l\o\w\c\k\t\z\c\i\3\9\5\z\l\j\3\2\p\o\t\z\j\4\0\6\v\x\c\r\z\t\q\i\x\9\k\4\4\p\k\y\x\w\5\5\f\b\i\1\t\6\r\t\2\l\b\s\c\0\9\q\p\d\i\4\f\2\y\4\r\u\t\o\m\b\j\p\c\b\9\s\n\g\u\v\i\q\z\g\z\1\h\5\e\s\7\g\w\3\4\y\i\w\m\x\q\7\d\m\n\x\f\a\e\7\k\a\5\i\2\5\n\l\f\s\p\n\s\3\0\c\y\k\r\y\1\b\x\6\q\r\3\7\k\y\d\9\4\m\9\w\5\z\u\u\x\k\d\d\9\k\r\t\6\0\i\v\j\d\6\d\f\s\y\f\m\u\r\6\5\r\p\n\3\j\y\8\y\t\t\9\r\2\r\q\w\3\e\q\m\k\k\p\g\l\v\q\l\a\a\9\0\l\0\n\x\j\4\v\t\x\f\f\f\8\u\9\h\c\x\v\t\4\v\j\4\k\s\7\o\b\k\x\c\o\x\3\p\q\n\f\0\0\t\j\z\c\e\m\e\9\j\r\7\h\f\t\x\f\7\g\h\3\6\6\k\i\u\l\x\6\p\m\j\p\x\u\c\q\2\r\6\l\y\v\m\7\c\3\p\6\f\h\p\a\x\k\t\f\j\7\o\3\1\k\d\4\o\x\l\6\2\f\v\b\u\n\k\1\k\8\9\0\g\3 ]] 00:11:06.869 13:16:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:06.869 13:16:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:06.869 [2024-09-27 13:16:08.585635] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:06.869 [2024-09-27 13:16:08.585773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60477 ] 00:11:07.127 [2024-09-27 13:16:08.721675] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.127 [2024-09-27 13:16:08.784417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.127 [2024-09-27 13:16:08.816765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:07.386  Copying: 512/512 [B] (average 125 kBps) 00:11:07.386 00:11:07.386 13:16:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iefxtt387bn76qh7b1f8wr666157hwnpkst0v1yy61gkh9cb98y5zo116fn16x8xxisqbjxynqjemfl44pvdeo7quxun1qe4bexscujz63d26fkm935lftcs5qbqs3y29x9zmtq4lxbi4yvcpl777mktrm2ax1sceo8oo9evud7n0wgy2lowcktzci395zlj32potzj406vxcrztqix9k44pkyxw55fbi1t6rt2lbsc09qpdi4f2y4rutombjpcb9snguviqzgz1h5es7gw34yiwmxq7dmnxfae7ka5i25nlfspns30cykry1bx6qr37kyd94m9w5zuuxkdd9krt60ivjd6dfsyfmur65rpn3jy8ytt9r2rqw3eqmkkpglvqlaa90l0nxj4vtxfff8u9hcxvt4vj4ks7obkxcox3pqnf00tjzceme9jr7hftxf7gh366kiulx6pmjpxucq2r6lyvm7c3p6fhpaxktfj7o31kd4oxl62fvbunk1k890g3 == 
\i\e\f\x\t\t\3\8\7\b\n\7\6\q\h\7\b\1\f\8\w\r\6\6\6\1\5\7\h\w\n\p\k\s\t\0\v\1\y\y\6\1\g\k\h\9\c\b\9\8\y\5\z\o\1\1\6\f\n\1\6\x\8\x\x\i\s\q\b\j\x\y\n\q\j\e\m\f\l\4\4\p\v\d\e\o\7\q\u\x\u\n\1\q\e\4\b\e\x\s\c\u\j\z\6\3\d\2\6\f\k\m\9\3\5\l\f\t\c\s\5\q\b\q\s\3\y\2\9\x\9\z\m\t\q\4\l\x\b\i\4\y\v\c\p\l\7\7\7\m\k\t\r\m\2\a\x\1\s\c\e\o\8\o\o\9\e\v\u\d\7\n\0\w\g\y\2\l\o\w\c\k\t\z\c\i\3\9\5\z\l\j\3\2\p\o\t\z\j\4\0\6\v\x\c\r\z\t\q\i\x\9\k\4\4\p\k\y\x\w\5\5\f\b\i\1\t\6\r\t\2\l\b\s\c\0\9\q\p\d\i\4\f\2\y\4\r\u\t\o\m\b\j\p\c\b\9\s\n\g\u\v\i\q\z\g\z\1\h\5\e\s\7\g\w\3\4\y\i\w\m\x\q\7\d\m\n\x\f\a\e\7\k\a\5\i\2\5\n\l\f\s\p\n\s\3\0\c\y\k\r\y\1\b\x\6\q\r\3\7\k\y\d\9\4\m\9\w\5\z\u\u\x\k\d\d\9\k\r\t\6\0\i\v\j\d\6\d\f\s\y\f\m\u\r\6\5\r\p\n\3\j\y\8\y\t\t\9\r\2\r\q\w\3\e\q\m\k\k\p\g\l\v\q\l\a\a\9\0\l\0\n\x\j\4\v\t\x\f\f\f\8\u\9\h\c\x\v\t\4\v\j\4\k\s\7\o\b\k\x\c\o\x\3\p\q\n\f\0\0\t\j\z\c\e\m\e\9\j\r\7\h\f\t\x\f\7\g\h\3\6\6\k\i\u\l\x\6\p\m\j\p\x\u\c\q\2\r\6\l\y\v\m\7\c\3\p\6\f\h\p\a\x\k\t\f\j\7\o\3\1\k\d\4\o\x\l\6\2\f\v\b\u\n\k\1\k\8\9\0\g\3 ]] 00:11:07.386 13:16:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:07.386 13:16:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:07.386 [2024-09-27 13:16:09.109753] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:07.386 [2024-09-27 13:16:09.109878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60487 ] 00:11:07.647 [2024-09-27 13:16:09.249281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.647 [2024-09-27 13:16:09.309045] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.647 [2024-09-27 13:16:09.339026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:07.907  Copying: 512/512 [B] (average 166 kBps) 00:11:07.907 00:11:07.907 13:16:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iefxtt387bn76qh7b1f8wr666157hwnpkst0v1yy61gkh9cb98y5zo116fn16x8xxisqbjxynqjemfl44pvdeo7quxun1qe4bexscujz63d26fkm935lftcs5qbqs3y29x9zmtq4lxbi4yvcpl777mktrm2ax1sceo8oo9evud7n0wgy2lowcktzci395zlj32potzj406vxcrztqix9k44pkyxw55fbi1t6rt2lbsc09qpdi4f2y4rutombjpcb9snguviqzgz1h5es7gw34yiwmxq7dmnxfae7ka5i25nlfspns30cykry1bx6qr37kyd94m9w5zuuxkdd9krt60ivjd6dfsyfmur65rpn3jy8ytt9r2rqw3eqmkkpglvqlaa90l0nxj4vtxfff8u9hcxvt4vj4ks7obkxcox3pqnf00tjzceme9jr7hftxf7gh366kiulx6pmjpxucq2r6lyvm7c3p6fhpaxktfj7o31kd4oxl62fvbunk1k890g3 == 
\i\e\f\x\t\t\3\8\7\b\n\7\6\q\h\7\b\1\f\8\w\r\6\6\6\1\5\7\h\w\n\p\k\s\t\0\v\1\y\y\6\1\g\k\h\9\c\b\9\8\y\5\z\o\1\1\6\f\n\1\6\x\8\x\x\i\s\q\b\j\x\y\n\q\j\e\m\f\l\4\4\p\v\d\e\o\7\q\u\x\u\n\1\q\e\4\b\e\x\s\c\u\j\z\6\3\d\2\6\f\k\m\9\3\5\l\f\t\c\s\5\q\b\q\s\3\y\2\9\x\9\z\m\t\q\4\l\x\b\i\4\y\v\c\p\l\7\7\7\m\k\t\r\m\2\a\x\1\s\c\e\o\8\o\o\9\e\v\u\d\7\n\0\w\g\y\2\l\o\w\c\k\t\z\c\i\3\9\5\z\l\j\3\2\p\o\t\z\j\4\0\6\v\x\c\r\z\t\q\i\x\9\k\4\4\p\k\y\x\w\5\5\f\b\i\1\t\6\r\t\2\l\b\s\c\0\9\q\p\d\i\4\f\2\y\4\r\u\t\o\m\b\j\p\c\b\9\s\n\g\u\v\i\q\z\g\z\1\h\5\e\s\7\g\w\3\4\y\i\w\m\x\q\7\d\m\n\x\f\a\e\7\k\a\5\i\2\5\n\l\f\s\p\n\s\3\0\c\y\k\r\y\1\b\x\6\q\r\3\7\k\y\d\9\4\m\9\w\5\z\u\u\x\k\d\d\9\k\r\t\6\0\i\v\j\d\6\d\f\s\y\f\m\u\r\6\5\r\p\n\3\j\y\8\y\t\t\9\r\2\r\q\w\3\e\q\m\k\k\p\g\l\v\q\l\a\a\9\0\l\0\n\x\j\4\v\t\x\f\f\f\8\u\9\h\c\x\v\t\4\v\j\4\k\s\7\o\b\k\x\c\o\x\3\p\q\n\f\0\0\t\j\z\c\e\m\e\9\j\r\7\h\f\t\x\f\7\g\h\3\6\6\k\i\u\l\x\6\p\m\j\p\x\u\c\q\2\r\6\l\y\v\m\7\c\3\p\6\f\h\p\a\x\k\t\f\j\7\o\3\1\k\d\4\o\x\l\6\2\f\v\b\u\n\k\1\k\8\9\0\g\3 ]] 00:11:07.907 13:16:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:07.907 13:16:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:11:07.907 13:16:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:07.907 13:16:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:07.907 13:16:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:07.907 13:16:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:07.907 [2024-09-27 13:16:09.632179] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:07.907 [2024-09-27 13:16:09.632292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60494 ] 00:11:08.167 [2024-09-27 13:16:09.768343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.167 [2024-09-27 13:16:09.829148] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.167 [2024-09-27 13:16:09.859501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:08.427  Copying: 512/512 [B] (average 500 kBps) 00:11:08.427 00:11:08.427 13:16:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8ko70wa6k31nd1y7bhy6rn616y5re0yzihmfydrymin5y4sd02l6bs1bwhsgsk4ehr7oi5xky0ji8bvwg0uho1wa80l5sg6po8kdcix1tk66pl8si2966edg04b7ptf7bslly170hzqcr9tqi2wgsckeuakyvvoviy6gq4ygrcrag8jdi8h5zoudffqtgrm7j3rq9sr9ac2gybwwzp9diyheykjhuxwivaj6azas71ctp75f928wwa9e0d9j0fze3uvo6pwmvpt7oohnz3ld6dd6sjso0a7s44mp1k9cnh7ikot8svmlagdehkku9fkb2030jedjd2jwdagztzu2ctdxkz2yxqnqb4qfpfcafxsxku7tgknuxlyhakotbrc0ns0f7eqwwfsd6r12nuba3fextoglq3bevb2h40z5hefwlcr7ady7pwxpd6u4iy35e6pp0vtkqkls1ymblqcz4hvaalxid7931d2370ksms2ak0vgjrajmjkxquncigck == \8\k\o\7\0\w\a\6\k\3\1\n\d\1\y\7\b\h\y\6\r\n\6\1\6\y\5\r\e\0\y\z\i\h\m\f\y\d\r\y\m\i\n\5\y\4\s\d\0\2\l\6\b\s\1\b\w\h\s\g\s\k\4\e\h\r\7\o\i\5\x\k\y\0\j\i\8\b\v\w\g\0\u\h\o\1\w\a\8\0\l\5\s\g\6\p\o\8\k\d\c\i\x\1\t\k\6\6\p\l\8\s\i\2\9\6\6\e\d\g\0\4\b\7\p\t\f\7\b\s\l\l\y\1\7\0\h\z\q\c\r\9\t\q\i\2\w\g\s\c\k\e\u\a\k\y\v\v\o\v\i\y\6\g\q\4\y\g\r\c\r\a\g\8\j\d\i\8\h\5\z\o\u\d\f\f\q\t\g\r\m\7\j\3\r\q\9\s\r\9\a\c\2\g\y\b\w\w\z\p\9\d\i\y\h\e\y\k\j\h\u\x\w\i\v\a\j\6\a\z\a\s\7\1\c\t\p\7\5\f\9\2\8\w\w\a\9\e\0\d\9\j\0\f\z\e\3\u\v\o\6\p\w\m\v\p\t\7\o\o\h\n\z\3\l\d\6\d\d\6\s\j\s\o\0\a\7\s\4\4\m\p\1\k\9\c\n\h\7\i\k\o\t\8\s\v\m\l\a\g\d\e\h\k\k\u\9\f\k\b\2\0\3\0\j\e\d\j\d\2\j\w\d\a\g\z\t\z\u\2\c\t\d\x\k\z\2\y\x\q\n\q\b\4\q\f\p\f\c\a\f\x\s\x\k\u\7\t\g\k\n\u\x\l\y\h\a\k\o\t\b\r\c\0\n\s\0\f\7\e\q\w\w\f\s\d\6\r\1\2\n\u\b\a\3\f\e\x\t\o\g\l\q\3\b\e\v\b\2\h\4\0\z\5\h\e\f\w\l\c\r\7\a\d\y\7\p\w\x\p\d\6\u\4\i\y\3\5\e\6\p\p\0\v\t\k\q\k\l\s\1\y\m\b\l\q\c\z\4\h\v\a\a\l\x\i\d\7\9\3\1\d\2\3\7\0\k\s\m\s\2\a\k\0\v\g\j\r\a\j\m\j\k\x\q\u\n\c\i\g\c\k ]] 00:11:08.427 13:16:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:08.427 13:16:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:08.427 [2024-09-27 13:16:10.128857] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:08.427 [2024-09-27 13:16:10.128980] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60502 ] 00:11:08.427 [2024-09-27 13:16:10.268400] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.686 [2024-09-27 13:16:10.327749] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.686 [2024-09-27 13:16:10.358663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:08.945  Copying: 512/512 [B] (average 500 kBps) 00:11:08.945 00:11:08.945 13:16:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8ko70wa6k31nd1y7bhy6rn616y5re0yzihmfydrymin5y4sd02l6bs1bwhsgsk4ehr7oi5xky0ji8bvwg0uho1wa80l5sg6po8kdcix1tk66pl8si2966edg04b7ptf7bslly170hzqcr9tqi2wgsckeuakyvvoviy6gq4ygrcrag8jdi8h5zoudffqtgrm7j3rq9sr9ac2gybwwzp9diyheykjhuxwivaj6azas71ctp75f928wwa9e0d9j0fze3uvo6pwmvpt7oohnz3ld6dd6sjso0a7s44mp1k9cnh7ikot8svmlagdehkku9fkb2030jedjd2jwdagztzu2ctdxkz2yxqnqb4qfpfcafxsxku7tgknuxlyhakotbrc0ns0f7eqwwfsd6r12nuba3fextoglq3bevb2h40z5hefwlcr7ady7pwxpd6u4iy35e6pp0vtkqkls1ymblqcz4hvaalxid7931d2370ksms2ak0vgjrajmjkxquncigck == \8\k\o\7\0\w\a\6\k\3\1\n\d\1\y\7\b\h\y\6\r\n\6\1\6\y\5\r\e\0\y\z\i\h\m\f\y\d\r\y\m\i\n\5\y\4\s\d\0\2\l\6\b\s\1\b\w\h\s\g\s\k\4\e\h\r\7\o\i\5\x\k\y\0\j\i\8\b\v\w\g\0\u\h\o\1\w\a\8\0\l\5\s\g\6\p\o\8\k\d\c\i\x\1\t\k\6\6\p\l\8\s\i\2\9\6\6\e\d\g\0\4\b\7\p\t\f\7\b\s\l\l\y\1\7\0\h\z\q\c\r\9\t\q\i\2\w\g\s\c\k\e\u\a\k\y\v\v\o\v\i\y\6\g\q\4\y\g\r\c\r\a\g\8\j\d\i\8\h\5\z\o\u\d\f\f\q\t\g\r\m\7\j\3\r\q\9\s\r\9\a\c\2\g\y\b\w\w\z\p\9\d\i\y\h\e\y\k\j\h\u\x\w\i\v\a\j\6\a\z\a\s\7\1\c\t\p\7\5\f\9\2\8\w\w\a\9\e\0\d\9\j\0\f\z\e\3\u\v\o\6\p\w\m\v\p\t\7\o\o\h\n\z\3\l\d\6\d\d\6\s\j\s\o\0\a\7\s\4\4\m\p\1\k\9\c\n\h\7\i\k\o\t\8\s\v\m\l\a\g\d\e\h\k\k\u\9\f\k\b\2\0\3\0\j\e\d\j\d\2\j\w\d\a\g\z\t\z\u\2\c\t\d\x\k\z\2\y\x\q\n\q\b\4\q\f\p\f\c\a\f\x\s\x\k\u\7\t\g\k\n\u\x\l\y\h\a\k\o\t\b\r\c\0\n\s\0\f\7\e\q\w\w\f\s\d\6\r\1\2\n\u\b\a\3\f\e\x\t\o\g\l\q\3\b\e\v\b\2\h\4\0\z\5\h\e\f\w\l\c\r\7\a\d\y\7\p\w\x\p\d\6\u\4\i\y\3\5\e\6\p\p\0\v\t\k\q\k\l\s\1\y\m\b\l\q\c\z\4\h\v\a\a\l\x\i\d\7\9\3\1\d\2\3\7\0\k\s\m\s\2\a\k\0\v\g\j\r\a\j\m\j\k\x\q\u\n\c\i\g\c\k ]] 00:11:08.945 13:16:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:08.945 13:16:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:08.945 [2024-09-27 13:16:10.647094] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:08.945 [2024-09-27 13:16:10.647217] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60509 ] 00:11:08.945 [2024-09-27 13:16:10.784930] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.204 [2024-09-27 13:16:10.845968] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.204 [2024-09-27 13:16:10.876707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:09.464  Copying: 512/512 [B] (average 166 kBps) 00:11:09.464 00:11:09.464 13:16:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8ko70wa6k31nd1y7bhy6rn616y5re0yzihmfydrymin5y4sd02l6bs1bwhsgsk4ehr7oi5xky0ji8bvwg0uho1wa80l5sg6po8kdcix1tk66pl8si2966edg04b7ptf7bslly170hzqcr9tqi2wgsckeuakyvvoviy6gq4ygrcrag8jdi8h5zoudffqtgrm7j3rq9sr9ac2gybwwzp9diyheykjhuxwivaj6azas71ctp75f928wwa9e0d9j0fze3uvo6pwmvpt7oohnz3ld6dd6sjso0a7s44mp1k9cnh7ikot8svmlagdehkku9fkb2030jedjd2jwdagztzu2ctdxkz2yxqnqb4qfpfcafxsxku7tgknuxlyhakotbrc0ns0f7eqwwfsd6r12nuba3fextoglq3bevb2h40z5hefwlcr7ady7pwxpd6u4iy35e6pp0vtkqkls1ymblqcz4hvaalxid7931d2370ksms2ak0vgjrajmjkxquncigck == \8\k\o\7\0\w\a\6\k\3\1\n\d\1\y\7\b\h\y\6\r\n\6\1\6\y\5\r\e\0\y\z\i\h\m\f\y\d\r\y\m\i\n\5\y\4\s\d\0\2\l\6\b\s\1\b\w\h\s\g\s\k\4\e\h\r\7\o\i\5\x\k\y\0\j\i\8\b\v\w\g\0\u\h\o\1\w\a\8\0\l\5\s\g\6\p\o\8\k\d\c\i\x\1\t\k\6\6\p\l\8\s\i\2\9\6\6\e\d\g\0\4\b\7\p\t\f\7\b\s\l\l\y\1\7\0\h\z\q\c\r\9\t\q\i\2\w\g\s\c\k\e\u\a\k\y\v\v\o\v\i\y\6\g\q\4\y\g\r\c\r\a\g\8\j\d\i\8\h\5\z\o\u\d\f\f\q\t\g\r\m\7\j\3\r\q\9\s\r\9\a\c\2\g\y\b\w\w\z\p\9\d\i\y\h\e\y\k\j\h\u\x\w\i\v\a\j\6\a\z\a\s\7\1\c\t\p\7\5\f\9\2\8\w\w\a\9\e\0\d\9\j\0\f\z\e\3\u\v\o\6\p\w\m\v\p\t\7\o\o\h\n\z\3\l\d\6\d\d\6\s\j\s\o\0\a\7\s\4\4\m\p\1\k\9\c\n\h\7\i\k\o\t\8\s\v\m\l\a\g\d\e\h\k\k\u\9\f\k\b\2\0\3\0\j\e\d\j\d\2\j\w\d\a\g\z\t\z\u\2\c\t\d\x\k\z\2\y\x\q\n\q\b\4\q\f\p\f\c\a\f\x\s\x\k\u\7\t\g\k\n\u\x\l\y\h\a\k\o\t\b\r\c\0\n\s\0\f\7\e\q\w\w\f\s\d\6\r\1\2\n\u\b\a\3\f\e\x\t\o\g\l\q\3\b\e\v\b\2\h\4\0\z\5\h\e\f\w\l\c\r\7\a\d\y\7\p\w\x\p\d\6\u\4\i\y\3\5\e\6\p\p\0\v\t\k\q\k\l\s\1\y\m\b\l\q\c\z\4\h\v\a\a\l\x\i\d\7\9\3\1\d\2\3\7\0\k\s\m\s\2\a\k\0\v\g\j\r\a\j\m\j\k\x\q\u\n\c\i\g\c\k ]] 00:11:09.464 13:16:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:09.464 13:16:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:09.464 [2024-09-27 13:16:11.132234] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:09.464 [2024-09-27 13:16:11.132350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60517 ] 00:11:09.464 [2024-09-27 13:16:11.268734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.723 [2024-09-27 13:16:11.326470] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.723 [2024-09-27 13:16:11.356154] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:09.723  Copying: 512/512 [B] (average 500 kBps) 00:11:09.723 00:11:09.723 13:16:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8ko70wa6k31nd1y7bhy6rn616y5re0yzihmfydrymin5y4sd02l6bs1bwhsgsk4ehr7oi5xky0ji8bvwg0uho1wa80l5sg6po8kdcix1tk66pl8si2966edg04b7ptf7bslly170hzqcr9tqi2wgsckeuakyvvoviy6gq4ygrcrag8jdi8h5zoudffqtgrm7j3rq9sr9ac2gybwwzp9diyheykjhuxwivaj6azas71ctp75f928wwa9e0d9j0fze3uvo6pwmvpt7oohnz3ld6dd6sjso0a7s44mp1k9cnh7ikot8svmlagdehkku9fkb2030jedjd2jwdagztzu2ctdxkz2yxqnqb4qfpfcafxsxku7tgknuxlyhakotbrc0ns0f7eqwwfsd6r12nuba3fextoglq3bevb2h40z5hefwlcr7ady7pwxpd6u4iy35e6pp0vtkqkls1ymblqcz4hvaalxid7931d2370ksms2ak0vgjrajmjkxquncigck == \8\k\o\7\0\w\a\6\k\3\1\n\d\1\y\7\b\h\y\6\r\n\6\1\6\y\5\r\e\0\y\z\i\h\m\f\y\d\r\y\m\i\n\5\y\4\s\d\0\2\l\6\b\s\1\b\w\h\s\g\s\k\4\e\h\r\7\o\i\5\x\k\y\0\j\i\8\b\v\w\g\0\u\h\o\1\w\a\8\0\l\5\s\g\6\p\o\8\k\d\c\i\x\1\t\k\6\6\p\l\8\s\i\2\9\6\6\e\d\g\0\4\b\7\p\t\f\7\b\s\l\l\y\1\7\0\h\z\q\c\r\9\t\q\i\2\w\g\s\c\k\e\u\a\k\y\v\v\o\v\i\y\6\g\q\4\y\g\r\c\r\a\g\8\j\d\i\8\h\5\z\o\u\d\f\f\q\t\g\r\m\7\j\3\r\q\9\s\r\9\a\c\2\g\y\b\w\w\z\p\9\d\i\y\h\e\y\k\j\h\u\x\w\i\v\a\j\6\a\z\a\s\7\1\c\t\p\7\5\f\9\2\8\w\w\a\9\e\0\d\9\j\0\f\z\e\3\u\v\o\6\p\w\m\v\p\t\7\o\o\h\n\z\3\l\d\6\d\d\6\s\j\s\o\0\a\7\s\4\4\m\p\1\k\9\c\n\h\7\i\k\o\t\8\s\v\m\l\a\g\d\e\h\k\k\u\9\f\k\b\2\0\3\0\j\e\d\j\d\2\j\w\d\a\g\z\t\z\u\2\c\t\d\x\k\z\2\y\x\q\n\q\b\4\q\f\p\f\c\a\f\x\s\x\k\u\7\t\g\k\n\u\x\l\y\h\a\k\o\t\b\r\c\0\n\s\0\f\7\e\q\w\w\f\s\d\6\r\1\2\n\u\b\a\3\f\e\x\t\o\g\l\q\3\b\e\v\b\2\h\4\0\z\5\h\e\f\w\l\c\r\7\a\d\y\7\p\w\x\p\d\6\u\4\i\y\3\5\e\6\p\p\0\v\t\k\q\k\l\s\1\y\m\b\l\q\c\z\4\h\v\a\a\l\x\i\d\7\9\3\1\d\2\3\7\0\k\s\m\s\2\a\k\0\v\g\j\r\a\j\m\j\k\x\q\u\n\c\i\g\c\k ]] 00:11:09.723 00:11:09.723 real 0m4.014s 00:11:09.723 user 0m2.192s 00:11:09.723 sys 0m0.818s 00:11:09.723 13:16:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:09.723 13:16:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:09.723 ************************************ 00:11:09.723 END TEST dd_flags_misc_forced_aio 00:11:09.723 ************************************ 00:11:09.982 13:16:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:11:09.982 13:16:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:09.982 13:16:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:09.982 00:11:09.982 real 0m18.091s 00:11:09.982 user 0m8.685s 00:11:09.982 sys 0m4.673s 00:11:09.982 13:16:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:09.982 ************************************ 00:11:09.982 END TEST spdk_dd_posix 00:11:09.982 
************************************ 00:11:09.982 13:16:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:09.982 13:16:11 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:11:09.982 13:16:11 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:09.982 13:16:11 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:09.982 13:16:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:09.982 ************************************ 00:11:09.982 START TEST spdk_dd_malloc 00:11:09.982 ************************************ 00:11:09.982 13:16:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:11:09.982 * Looking for test storage... 00:11:09.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:09.982 13:16:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:09.982 13:16:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:09.982 13:16:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lcov --version 00:11:09.982 13:16:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:09.982 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.982 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.982 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.982 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.982 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.982 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.982 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.982 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.982 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.982 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.982 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.982 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:11:09.982 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:09.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.983 --rc genhtml_branch_coverage=1 00:11:09.983 --rc genhtml_function_coverage=1 00:11:09.983 --rc genhtml_legend=1 00:11:09.983 --rc geninfo_all_blocks=1 00:11:09.983 --rc geninfo_unexecuted_blocks=1 00:11:09.983 00:11:09.983 ' 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:09.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.983 --rc genhtml_branch_coverage=1 00:11:09.983 --rc genhtml_function_coverage=1 00:11:09.983 --rc genhtml_legend=1 00:11:09.983 --rc geninfo_all_blocks=1 00:11:09.983 --rc geninfo_unexecuted_blocks=1 00:11:09.983 00:11:09.983 ' 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:09.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.983 --rc genhtml_branch_coverage=1 00:11:09.983 --rc genhtml_function_coverage=1 00:11:09.983 --rc genhtml_legend=1 00:11:09.983 --rc geninfo_all_blocks=1 00:11:09.983 --rc geninfo_unexecuted_blocks=1 00:11:09.983 00:11:09.983 ' 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:09.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.983 --rc genhtml_branch_coverage=1 00:11:09.983 --rc genhtml_function_coverage=1 00:11:09.983 --rc genhtml_legend=1 00:11:09.983 --rc geninfo_all_blocks=1 00:11:09.983 --rc geninfo_unexecuted_blocks=1 00:11:09.983 00:11:09.983 ' 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.983 13:16:11 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.983 13:16:11 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:11:10.242 13:16:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:10.242 13:16:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.243 13:16:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:11:10.243 ************************************ 00:11:10.243 START TEST dd_malloc_copy 00:11:10.243 ************************************ 00:11:10.243 13:16:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:11:10.243 13:16:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:11:10.243 13:16:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:11:10.243 13:16:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
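The dd/malloc.sh trace above declares bash associative arrays such as method_bdev_malloc_create_0, which the test's gen_conf helper later renders as the JSON bdev configuration handed to spdk_dd over /dev/fd/62 (the resulting JSON is visible a few records below). The following is a minimal, illustrative sketch of that array-to-JSON mapping, assuming only the name, num_blocks and block_size keys seen in this trace; bdev_json_entry is a hypothetical helper invented here for illustration and is not the actual gen_conf implementation from dd/common.sh.

    #!/usr/bin/env bash
    # Two malloc bdevs, mirroring the arrays declared in the trace above.
    declare -A method_bdev_malloc_create_0=([name]=malloc0 [num_blocks]=1048576 [block_size]=512)
    declare -A method_bdev_malloc_create_1=([name]=malloc1 [num_blocks]=1048576 [block_size]=512)

    # bdev_json_entry METHOD ARRAY_NAME -> one {"params": ..., "method": ...} object
    bdev_json_entry() {
        local -n p=$2   # nameref to the associative array holding the RPC params
        printf '{"params": {"name": "%s", "num_blocks": %s, "block_size": %s}, "method": "%s"}' \
            "${p[name]}" "${p[num_blocks]}" "${p[block_size]}" "$1"
    }

    # Emit the same overall shape as the config shown later in this log:
    # subsystems -> bdev -> config, closed by a bdev_wait_for_examine step.
    printf '{"subsystems": [{"subsystem": "bdev", "config": [%s, %s, {"method": "bdev_wait_for_examine"}]}]}\n' \
        "$(bdev_json_entry bdev_malloc_create method_bdev_malloc_create_0)" \
        "$(bdev_json_entry bdev_malloc_create method_bdev_malloc_create_1)"

In the actual run, the generated configuration is consumed on an anonymous pipe, e.g. spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62, as the subsequent trace records show.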
00:11:10.243 13:16:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:11:10.243 13:16:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:11:10.243 13:16:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:11:10.243 13:16:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:11:10.243 13:16:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:11:10.243 13:16:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:10.243 13:16:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:10.243 { 00:11:10.243 "subsystems": [ 00:11:10.243 { 00:11:10.243 "subsystem": "bdev", 00:11:10.243 "config": [ 00:11:10.243 { 00:11:10.243 "params": { 00:11:10.243 "block_size": 512, 00:11:10.243 "num_blocks": 1048576, 00:11:10.243 "name": "malloc0" 00:11:10.243 }, 00:11:10.243 "method": "bdev_malloc_create" 00:11:10.243 }, 00:11:10.243 { 00:11:10.243 "params": { 00:11:10.243 "block_size": 512, 00:11:10.243 "num_blocks": 1048576, 00:11:10.243 "name": "malloc1" 00:11:10.243 }, 00:11:10.243 "method": "bdev_malloc_create" 00:11:10.243 }, 00:11:10.243 { 00:11:10.243 "method": "bdev_wait_for_examine" 00:11:10.243 } 00:11:10.243 ] 00:11:10.243 } 00:11:10.243 ] 00:11:10.243 } 00:11:10.243 [2024-09-27 13:16:11.895889] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:10.243 [2024-09-27 13:16:11.895982] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60593 ] 00:11:10.243 [2024-09-27 13:16:12.032567] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.502 [2024-09-27 13:16:12.092475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.502 [2024-09-27 13:16:12.124478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:13.641  Copying: 198/512 [MB] (198 MBps) Copying: 393/512 [MB] (195 MBps) Copying: 512/512 [MB] (average 197 MBps) 00:11:13.641 00:11:13.641 13:16:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:11:13.641 13:16:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:11:13.641 13:16:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:13.641 13:16:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:13.641 [2024-09-27 13:16:15.347141] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:13.641 [2024-09-27 13:16:15.347240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60641 ] 00:11:13.641 { 00:11:13.641 "subsystems": [ 00:11:13.641 { 00:11:13.641 "subsystem": "bdev", 00:11:13.641 "config": [ 00:11:13.641 { 00:11:13.641 "params": { 00:11:13.641 "block_size": 512, 00:11:13.641 "num_blocks": 1048576, 00:11:13.641 "name": "malloc0" 00:11:13.641 }, 00:11:13.641 "method": "bdev_malloc_create" 00:11:13.641 }, 00:11:13.641 { 00:11:13.641 "params": { 00:11:13.641 "block_size": 512, 00:11:13.641 "num_blocks": 1048576, 00:11:13.641 "name": "malloc1" 00:11:13.641 }, 00:11:13.641 "method": "bdev_malloc_create" 00:11:13.641 }, 00:11:13.641 { 00:11:13.641 "method": "bdev_wait_for_examine" 00:11:13.641 } 00:11:13.641 ] 00:11:13.641 } 00:11:13.641 ] 00:11:13.641 } 00:11:13.641 [2024-09-27 13:16:15.485633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.899 [2024-09-27 13:16:15.546605] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.900 [2024-09-27 13:16:15.577858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:17.079  Copying: 207/512 [MB] (207 MBps) Copying: 415/512 [MB] (208 MBps) Copying: 512/512 [MB] (average 204 MBps) 00:11:17.079 00:11:17.079 00:11:17.079 real 0m6.816s 00:11:17.079 user 0m6.141s 00:11:17.079 sys 0m0.519s 00:11:17.079 13:16:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:17.079 ************************************ 00:11:17.079 13:16:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:17.079 END TEST dd_malloc_copy 00:11:17.079 ************************************ 00:11:17.079 00:11:17.079 real 0m7.057s 00:11:17.079 user 0m6.280s 00:11:17.079 sys 0m0.623s 00:11:17.079 13:16:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:17.079 13:16:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:11:17.079 ************************************ 00:11:17.079 END TEST spdk_dd_malloc 00:11:17.079 ************************************ 00:11:17.079 13:16:18 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:17.079 13:16:18 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:17.079 13:16:18 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.079 13:16:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:17.079 ************************************ 00:11:17.079 START TEST spdk_dd_bdev_to_bdev 00:11:17.079 ************************************ 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:17.079 * Looking for test storage... 
00:11:17.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lcov --version 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.079 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:17.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.338 --rc genhtml_branch_coverage=1 00:11:17.338 --rc genhtml_function_coverage=1 00:11:17.338 --rc genhtml_legend=1 00:11:17.338 --rc geninfo_all_blocks=1 00:11:17.338 --rc geninfo_unexecuted_blocks=1 00:11:17.338 00:11:17.338 ' 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:17.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.338 --rc genhtml_branch_coverage=1 00:11:17.338 --rc genhtml_function_coverage=1 00:11:17.338 --rc genhtml_legend=1 00:11:17.338 --rc geninfo_all_blocks=1 00:11:17.338 --rc geninfo_unexecuted_blocks=1 00:11:17.338 00:11:17.338 ' 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:17.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.338 --rc genhtml_branch_coverage=1 00:11:17.338 --rc genhtml_function_coverage=1 00:11:17.338 --rc genhtml_legend=1 00:11:17.338 --rc geninfo_all_blocks=1 00:11:17.338 --rc geninfo_unexecuted_blocks=1 00:11:17.338 00:11:17.338 ' 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:17.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.338 --rc genhtml_branch_coverage=1 00:11:17.338 --rc genhtml_function_coverage=1 00:11:17.338 --rc genhtml_legend=1 00:11:17.338 --rc geninfo_all_blocks=1 00:11:17.338 --rc geninfo_unexecuted_blocks=1 00:11:17.338 00:11:17.338 ' 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.338 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.338 13:16:18 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:17.339 ************************************ 00:11:17.339 START TEST dd_inflate_file 00:11:17.339 ************************************ 00:11:17.339 13:16:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:17.339 [2024-09-27 13:16:18.993846] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:17.339 [2024-09-27 13:16:18.993958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60753 ] 00:11:17.339 [2024-09-27 13:16:19.130898] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.598 [2024-09-27 13:16:19.203396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.598 [2024-09-27 13:16:19.238629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:17.857  Copying: 64/64 [MB] (average 1828 MBps) 00:11:17.857 00:11:17.857 00:11:17.857 real 0m0.503s 00:11:17.857 user 0m0.293s 00:11:17.857 sys 0m0.223s 00:11:17.857 13:16:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:17.857 13:16:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:11:17.857 ************************************ 00:11:17.857 END TEST dd_inflate_file 00:11:17.857 ************************************ 00:11:17.858 13:16:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:11:17.858 13:16:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:11:17.858 13:16:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:17.858 13:16:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:11:17.858 13:16:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:11:17.858 13:16:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.858 13:16:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:17.858 13:16:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:17.858 13:16:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:17.858 ************************************ 00:11:17.858 START TEST dd_copy_to_out_bdev 00:11:17.858 ************************************ 00:11:17.858 13:16:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:17.858 { 00:11:17.858 "subsystems": [ 00:11:17.858 { 00:11:17.858 "subsystem": "bdev", 00:11:17.858 "config": [ 00:11:17.858 { 00:11:17.858 "params": { 00:11:17.858 "trtype": "pcie", 00:11:17.858 "traddr": "0000:00:10.0", 00:11:17.858 "name": "Nvme0" 00:11:17.858 }, 00:11:17.858 "method": "bdev_nvme_attach_controller" 00:11:17.858 }, 00:11:17.858 { 00:11:17.858 "params": { 00:11:17.858 "trtype": "pcie", 00:11:17.858 "traddr": "0000:00:11.0", 00:11:17.858 "name": "Nvme1" 00:11:17.858 }, 00:11:17.858 "method": "bdev_nvme_attach_controller" 00:11:17.858 }, 00:11:17.858 { 00:11:17.858 "method": "bdev_wait_for_examine" 00:11:17.858 } 00:11:17.858 ] 00:11:17.858 } 00:11:17.858 ] 00:11:17.858 } 00:11:17.858 [2024-09-27 13:16:19.563290] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:17.858 [2024-09-27 13:16:19.563393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60787 ] 00:11:17.858 [2024-09-27 13:16:19.700647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.136 [2024-09-27 13:16:19.762907] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.136 [2024-09-27 13:16:19.792530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:19.512  Copying: 57/64 [MB] (57 MBps) Copying: 64/64 [MB] (average 57 MBps) 00:11:19.512 00:11:19.512 00:11:19.512 real 0m1.751s 00:11:19.512 user 0m1.560s 00:11:19.512 sys 0m1.376s 00:11:19.512 13:16:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:19.512 13:16:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:19.512 ************************************ 00:11:19.512 END TEST dd_copy_to_out_bdev 00:11:19.512 ************************************ 00:11:19.512 13:16:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:11:19.512 13:16:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:11:19.512 13:16:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:19.512 13:16:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:19.512 13:16:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:19.512 ************************************ 00:11:19.512 START TEST dd_offset_magic 00:11:19.512 ************************************ 00:11:19.512 13:16:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:11:19.512 13:16:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:11:19.512 13:16:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:11:19.512 13:16:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:11:19.512 13:16:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:19.512 13:16:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:11:19.512 13:16:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:19.512 13:16:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:19.512 13:16:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:19.771 [2024-09-27 13:16:21.364280] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:19.771 [2024-09-27 13:16:21.364363] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60832 ] 00:11:19.771 { 00:11:19.771 "subsystems": [ 00:11:19.771 { 00:11:19.771 "subsystem": "bdev", 00:11:19.771 "config": [ 00:11:19.771 { 00:11:19.771 "params": { 00:11:19.771 "trtype": "pcie", 00:11:19.771 "traddr": "0000:00:10.0", 00:11:19.771 "name": "Nvme0" 00:11:19.771 }, 00:11:19.771 "method": "bdev_nvme_attach_controller" 00:11:19.771 }, 00:11:19.771 { 00:11:19.771 "params": { 00:11:19.771 "trtype": "pcie", 00:11:19.771 "traddr": "0000:00:11.0", 00:11:19.771 "name": "Nvme1" 00:11:19.771 }, 00:11:19.771 "method": "bdev_nvme_attach_controller" 00:11:19.771 }, 00:11:19.771 { 00:11:19.771 "method": "bdev_wait_for_examine" 00:11:19.771 } 00:11:19.771 ] 00:11:19.771 } 00:11:19.771 ] 00:11:19.771 } 00:11:19.771 [2024-09-27 13:16:21.496052] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.771 [2024-09-27 13:16:21.556975] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.771 [2024-09-27 13:16:21.587821] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:20.290  Copying: 65/65 [MB] (average 1048 MBps) 00:11:20.290 00:11:20.290 13:16:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:20.290 13:16:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:11:20.290 13:16:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:20.290 13:16:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:20.290 [2024-09-27 13:16:22.064188] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:20.290 [2024-09-27 13:16:22.064372] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60842 ] 00:11:20.290 { 00:11:20.290 "subsystems": [ 00:11:20.290 { 00:11:20.290 "subsystem": "bdev", 00:11:20.290 "config": [ 00:11:20.290 { 00:11:20.290 "params": { 00:11:20.290 "trtype": "pcie", 00:11:20.290 "traddr": "0000:00:10.0", 00:11:20.290 "name": "Nvme0" 00:11:20.290 }, 00:11:20.290 "method": "bdev_nvme_attach_controller" 00:11:20.290 }, 00:11:20.290 { 00:11:20.290 "params": { 00:11:20.290 "trtype": "pcie", 00:11:20.290 "traddr": "0000:00:11.0", 00:11:20.290 "name": "Nvme1" 00:11:20.290 }, 00:11:20.290 "method": "bdev_nvme_attach_controller" 00:11:20.290 }, 00:11:20.290 { 00:11:20.291 "method": "bdev_wait_for_examine" 00:11:20.291 } 00:11:20.291 ] 00:11:20.291 } 00:11:20.291 ] 00:11:20.291 } 00:11:20.549 [2024-09-27 13:16:22.200149] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.549 [2024-09-27 13:16:22.259212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.549 [2024-09-27 13:16:22.290024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:20.807  Copying: 1024/1024 [kB] (average 500 MBps) 00:11:20.807 00:11:20.808 13:16:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:20.808 13:16:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:20.808 13:16:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:20.808 13:16:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:11:20.808 13:16:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:20.808 13:16:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:20.808 13:16:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:21.067 [2024-09-27 13:16:22.673153] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:21.067 [2024-09-27 13:16:22.673288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60863 ] 00:11:21.067 { 00:11:21.067 "subsystems": [ 00:11:21.067 { 00:11:21.067 "subsystem": "bdev", 00:11:21.067 "config": [ 00:11:21.067 { 00:11:21.067 "params": { 00:11:21.067 "trtype": "pcie", 00:11:21.067 "traddr": "0000:00:10.0", 00:11:21.067 "name": "Nvme0" 00:11:21.067 }, 00:11:21.067 "method": "bdev_nvme_attach_controller" 00:11:21.067 }, 00:11:21.067 { 00:11:21.067 "params": { 00:11:21.067 "trtype": "pcie", 00:11:21.067 "traddr": "0000:00:11.0", 00:11:21.067 "name": "Nvme1" 00:11:21.067 }, 00:11:21.067 "method": "bdev_nvme_attach_controller" 00:11:21.067 }, 00:11:21.067 { 00:11:21.067 "method": "bdev_wait_for_examine" 00:11:21.067 } 00:11:21.067 ] 00:11:21.067 } 00:11:21.067 ] 00:11:21.067 } 00:11:21.067 [2024-09-27 13:16:22.808246] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.067 [2024-09-27 13:16:22.866653] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.067 [2024-09-27 13:16:22.896684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:21.586  Copying: 65/65 [MB] (average 1226 MBps) 00:11:21.586 00:11:21.586 13:16:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:11:21.586 13:16:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:21.586 13:16:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:21.586 13:16:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:21.586 [2024-09-27 13:16:23.369162] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:21.586 [2024-09-27 13:16:23.369315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60883 ] 00:11:21.586 { 00:11:21.586 "subsystems": [ 00:11:21.586 { 00:11:21.586 "subsystem": "bdev", 00:11:21.586 "config": [ 00:11:21.586 { 00:11:21.586 "params": { 00:11:21.586 "trtype": "pcie", 00:11:21.586 "traddr": "0000:00:10.0", 00:11:21.586 "name": "Nvme0" 00:11:21.586 }, 00:11:21.586 "method": "bdev_nvme_attach_controller" 00:11:21.586 }, 00:11:21.586 { 00:11:21.586 "params": { 00:11:21.586 "trtype": "pcie", 00:11:21.586 "traddr": "0000:00:11.0", 00:11:21.586 "name": "Nvme1" 00:11:21.586 }, 00:11:21.586 "method": "bdev_nvme_attach_controller" 00:11:21.586 }, 00:11:21.586 { 00:11:21.586 "method": "bdev_wait_for_examine" 00:11:21.586 } 00:11:21.586 ] 00:11:21.586 } 00:11:21.586 ] 00:11:21.586 } 00:11:21.846 [2024-09-27 13:16:23.507420] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.846 [2024-09-27 13:16:23.562730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.846 [2024-09-27 13:16:23.593049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:22.106  Copying: 1024/1024 [kB] (average 500 MBps) 00:11:22.106 00:11:22.106 13:16:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:22.106 13:16:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:22.106 00:11:22.106 real 0m2.618s 00:11:22.106 user 0m1.964s 00:11:22.106 sys 0m0.667s 00:11:22.106 13:16:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.106 13:16:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:22.106 ************************************ 00:11:22.106 END TEST dd_offset_magic 00:11:22.106 ************************************ 00:11:22.365 13:16:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:11:22.365 13:16:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:11:22.365 13:16:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:22.365 13:16:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:22.365 13:16:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:22.365 13:16:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:22.365 13:16:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:22.365 13:16:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:11:22.365 13:16:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:22.365 13:16:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:22.365 13:16:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:22.365 [2024-09-27 13:16:24.031373] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:22.365 [2024-09-27 13:16:24.031464] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60909 ] 00:11:22.365 { 00:11:22.365 "subsystems": [ 00:11:22.365 { 00:11:22.365 "subsystem": "bdev", 00:11:22.365 "config": [ 00:11:22.365 { 00:11:22.365 "params": { 00:11:22.365 "trtype": "pcie", 00:11:22.365 "traddr": "0000:00:10.0", 00:11:22.365 "name": "Nvme0" 00:11:22.365 }, 00:11:22.365 "method": "bdev_nvme_attach_controller" 00:11:22.365 }, 00:11:22.365 { 00:11:22.365 "params": { 00:11:22.365 "trtype": "pcie", 00:11:22.365 "traddr": "0000:00:11.0", 00:11:22.365 "name": "Nvme1" 00:11:22.365 }, 00:11:22.365 "method": "bdev_nvme_attach_controller" 00:11:22.365 }, 00:11:22.365 { 00:11:22.365 "method": "bdev_wait_for_examine" 00:11:22.365 } 00:11:22.365 ] 00:11:22.365 } 00:11:22.365 ] 00:11:22.365 } 00:11:22.365 [2024-09-27 13:16:24.164475] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.624 [2024-09-27 13:16:24.229152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.624 [2024-09-27 13:16:24.261556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:22.882  Copying: 5120/5120 [kB] (average 1250 MBps) 00:11:22.882 00:11:22.882 13:16:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:11:22.882 13:16:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:11:22.882 13:16:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:22.882 13:16:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:22.882 13:16:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:22.882 13:16:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:22.882 13:16:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:11:22.882 13:16:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:22.882 13:16:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:22.882 13:16:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:22.882 [2024-09-27 13:16:24.656321] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:22.882 { 00:11:22.882 "subsystems": [ 00:11:22.882 { 00:11:22.882 "subsystem": "bdev", 00:11:22.882 "config": [ 00:11:22.882 { 00:11:22.882 "params": { 00:11:22.882 "trtype": "pcie", 00:11:22.882 "traddr": "0000:00:10.0", 00:11:22.882 "name": "Nvme0" 00:11:22.882 }, 00:11:22.882 "method": "bdev_nvme_attach_controller" 00:11:22.882 }, 00:11:22.882 { 00:11:22.882 "params": { 00:11:22.882 "trtype": "pcie", 00:11:22.882 "traddr": "0000:00:11.0", 00:11:22.882 "name": "Nvme1" 00:11:22.882 }, 00:11:22.882 "method": "bdev_nvme_attach_controller" 00:11:22.882 }, 00:11:22.882 { 00:11:22.882 "method": "bdev_wait_for_examine" 00:11:22.882 } 00:11:22.882 ] 00:11:22.882 } 00:11:22.882 ] 00:11:22.882 } 00:11:22.882 [2024-09-27 13:16:24.656417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60930 ] 00:11:23.140 [2024-09-27 13:16:24.791776] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.140 [2024-09-27 13:16:24.851595] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.140 [2024-09-27 13:16:24.883971] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:23.398  Copying: 5120/5120 [kB] (average 833 MBps) 00:11:23.398 00:11:23.398 13:16:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:11:23.398 00:11:23.398 real 0m6.495s 00:11:23.398 user 0m4.916s 00:11:23.398 sys 0m2.842s 00:11:23.398 13:16:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.657 ************************************ 00:11:23.657 END TEST spdk_dd_bdev_to_bdev 00:11:23.657 ************************************ 00:11:23.657 13:16:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:23.657 13:16:25 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:11:23.657 13:16:25 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:23.657 13:16:25 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:23.657 13:16:25 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.657 13:16:25 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:23.657 ************************************ 00:11:23.657 START TEST spdk_dd_uring 00:11:23.657 ************************************ 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:23.657 * Looking for test storage... 
00:11:23.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lcov --version 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:23.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.657 --rc genhtml_branch_coverage=1 00:11:23.657 --rc genhtml_function_coverage=1 00:11:23.657 --rc genhtml_legend=1 00:11:23.657 --rc geninfo_all_blocks=1 00:11:23.657 --rc geninfo_unexecuted_blocks=1 00:11:23.657 00:11:23.657 ' 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:23.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.657 --rc genhtml_branch_coverage=1 00:11:23.657 --rc genhtml_function_coverage=1 00:11:23.657 --rc genhtml_legend=1 00:11:23.657 --rc geninfo_all_blocks=1 00:11:23.657 --rc geninfo_unexecuted_blocks=1 00:11:23.657 00:11:23.657 ' 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:23.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.657 --rc genhtml_branch_coverage=1 00:11:23.657 --rc genhtml_function_coverage=1 00:11:23.657 --rc genhtml_legend=1 00:11:23.657 --rc geninfo_all_blocks=1 00:11:23.657 --rc geninfo_unexecuted_blocks=1 00:11:23.657 00:11:23.657 ' 00:11:23.657 13:16:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:23.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.657 --rc genhtml_branch_coverage=1 00:11:23.657 --rc genhtml_function_coverage=1 00:11:23.657 --rc genhtml_legend=1 00:11:23.657 --rc geninfo_all_blocks=1 00:11:23.658 --rc geninfo_unexecuted_blocks=1 00:11:23.658 00:11:23.658 ' 00:11:23.658 13:16:25 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:23.658 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:11:23.658 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.658 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.658 13:16:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.658 13:16:25 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.658 13:16:25 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.658 13:16:25 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.658 13:16:25 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:11:23.658 13:16:25 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.658 13:16:25 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:11:23.658 13:16:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:23.658 13:16:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.658 13:16:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:11:23.917 ************************************ 00:11:23.917 START TEST dd_uring_copy 00:11:23.917 ************************************ 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:23.917 
13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=s16phi0raort0ov1hdgw06tkkaab3luiq1mdyphxt88xxk32bkn8qn6yl3vrs9jr1drhs6w4xfysgomds1e1fwmlxul2qkbrm4rmrcfzjo46zv4ip90eilo97ajvtvfokbyw1nv942tpyjt5o884522t7k1jy13imqd0c15ovio5nbsm3gc4dn05wjxlu8759xw1s16t025d2lcs51hzi5u4t5ggbthbgsrdllf74a3prrmm2dqirlimxp9s9wr2d34hc37wrm61534mi4vx4vj5zlaixnjsei844ag9d8o84gs5cuwpv013uva37v6h4swcydoeg5fode2ccemoj03sdsf8felvgb1dhabavrcy2v9vbf4efm5ltdafocuh8dmks8itdvw3e1x1euk2ak1rq78wp53de2ikbnopz23qme9d80t0h0r00fu4jbj5s04cggqaijj029t9ce8m5nkpsoysmir19l504xt8v93dvrwkfxigdmsa4fq680t5zopqfa88iobrc3xd1pzy64lmn203kevnd15lgegprrvlw9ke4qplk7o0mbvl6omjp3g3f8mmaforuvtb2r9c3c03c15i1omqet5c7rmypi2s4vilshu98ln2b4i537xp8hmg2w037so2akex4d65f268wijzkv73nkzg0q2s8914kpeu7gkggw01e48t4qn7lfcgfqh590h3kius9r9hry78vnr3uivz6e3apb4sbnlj23guogz2uoh34aws0figa2zsg2zx1m3pgo06t19648oiacx5y58bobhork8iinn3nd10f1yltdvhc7n7chp9hn8k37tbw5g2lg7m8rm4fi49a2flznkai0ybgv9ncojq8wlwmjr3h1zjok6seyjzljh60ltoxsitswzsjoy7rqdnqpy6scfp5cjcy649k21okrze0rtd9na1hx9zefrs7n70j1s35aqf9hy8w2rnb1jowqnn207vsykum3wqrjddyoec1xffk3sjfm760p3v 00:11:23.917 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
s16phi0raort0ov1hdgw06tkkaab3luiq1mdyphxt88xxk32bkn8qn6yl3vrs9jr1drhs6w4xfysgomds1e1fwmlxul2qkbrm4rmrcfzjo46zv4ip90eilo97ajvtvfokbyw1nv942tpyjt5o884522t7k1jy13imqd0c15ovio5nbsm3gc4dn05wjxlu8759xw1s16t025d2lcs51hzi5u4t5ggbthbgsrdllf74a3prrmm2dqirlimxp9s9wr2d34hc37wrm61534mi4vx4vj5zlaixnjsei844ag9d8o84gs5cuwpv013uva37v6h4swcydoeg5fode2ccemoj03sdsf8felvgb1dhabavrcy2v9vbf4efm5ltdafocuh8dmks8itdvw3e1x1euk2ak1rq78wp53de2ikbnopz23qme9d80t0h0r00fu4jbj5s04cggqaijj029t9ce8m5nkpsoysmir19l504xt8v93dvrwkfxigdmsa4fq680t5zopqfa88iobrc3xd1pzy64lmn203kevnd15lgegprrvlw9ke4qplk7o0mbvl6omjp3g3f8mmaforuvtb2r9c3c03c15i1omqet5c7rmypi2s4vilshu98ln2b4i537xp8hmg2w037so2akex4d65f268wijzkv73nkzg0q2s8914kpeu7gkggw01e48t4qn7lfcgfqh590h3kius9r9hry78vnr3uivz6e3apb4sbnlj23guogz2uoh34aws0figa2zsg2zx1m3pgo06t19648oiacx5y58bobhork8iinn3nd10f1yltdvhc7n7chp9hn8k37tbw5g2lg7m8rm4fi49a2flznkai0ybgv9ncojq8wlwmjr3h1zjok6seyjzljh60ltoxsitswzsjoy7rqdnqpy6scfp5cjcy649k21okrze0rtd9na1hx9zefrs7n70j1s35aqf9hy8w2rnb1jowqnn207vsykum3wqrjddyoec1xffk3sjfm760p3v 00:11:23.918 13:16:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:11:23.918 [2024-09-27 13:16:25.592499] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:23.918 [2024-09-27 13:16:25.592628] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61009 ] 00:11:23.918 [2024-09-27 13:16:25.736783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.175 [2024-09-27 13:16:25.826290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.175 [2024-09-27 13:16:25.865194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:24.999  Copying: 511/511 [MB] (average 1201 MBps) 00:11:24.999 00:11:24.999 13:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:11:24.999 13:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:11:24.999 13:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:24.999 13:16:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:24.999 [2024-09-27 13:16:26.758438] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:24.999 [2024-09-27 13:16:26.758537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61027 ] 00:11:24.999 { 00:11:24.999 "subsystems": [ 00:11:24.999 { 00:11:24.999 "subsystem": "bdev", 00:11:24.999 "config": [ 00:11:24.999 { 00:11:24.999 "params": { 00:11:24.999 "block_size": 512, 00:11:24.999 "num_blocks": 1048576, 00:11:24.999 "name": "malloc0" 00:11:24.999 }, 00:11:24.999 "method": "bdev_malloc_create" 00:11:24.999 }, 00:11:24.999 { 00:11:24.999 "params": { 00:11:24.999 "filename": "/dev/zram1", 00:11:24.999 "name": "uring0" 00:11:24.999 }, 00:11:24.999 "method": "bdev_uring_create" 00:11:24.999 }, 00:11:24.999 { 00:11:24.999 "method": "bdev_wait_for_examine" 00:11:24.999 } 00:11:24.999 ] 00:11:24.999 } 00:11:24.999 ] 00:11:24.999 } 00:11:25.258 [2024-09-27 13:16:26.896518] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.258 [2024-09-27 13:16:26.955266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.258 [2024-09-27 13:16:26.986665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:28.087  Copying: 215/512 [MB] (215 MBps) Copying: 433/512 [MB] (217 MBps) Copying: 512/512 [MB] (average 216 MBps) 00:11:28.087 00:11:28.087 13:16:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:11:28.087 13:16:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:11:28.087 13:16:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:28.087 13:16:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:28.087 [2024-09-27 13:16:29.789271] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:28.087 [2024-09-27 13:16:29.789374] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61071 ] 00:11:28.087 { 00:11:28.087 "subsystems": [ 00:11:28.087 { 00:11:28.087 "subsystem": "bdev", 00:11:28.087 "config": [ 00:11:28.087 { 00:11:28.087 "params": { 00:11:28.087 "block_size": 512, 00:11:28.087 "num_blocks": 1048576, 00:11:28.087 "name": "malloc0" 00:11:28.087 }, 00:11:28.087 "method": "bdev_malloc_create" 00:11:28.087 }, 00:11:28.087 { 00:11:28.087 "params": { 00:11:28.087 "filename": "/dev/zram1", 00:11:28.087 "name": "uring0" 00:11:28.087 }, 00:11:28.087 "method": "bdev_uring_create" 00:11:28.087 }, 00:11:28.087 { 00:11:28.087 "method": "bdev_wait_for_examine" 00:11:28.087 } 00:11:28.087 ] 00:11:28.087 } 00:11:28.087 ] 00:11:28.087 } 00:11:28.087 [2024-09-27 13:16:29.928653] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.345 [2024-09-27 13:16:29.985421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.345 [2024-09-27 13:16:30.016768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:31.859  Copying: 179/512 [MB] (179 MBps) Copying: 347/512 [MB] (167 MBps) Copying: 502/512 [MB] (155 MBps) Copying: 512/512 [MB] (average 167 MBps) 00:11:31.859 00:11:31.859 13:16:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:11:31.859 13:16:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ s16phi0raort0ov1hdgw06tkkaab3luiq1mdyphxt88xxk32bkn8qn6yl3vrs9jr1drhs6w4xfysgomds1e1fwmlxul2qkbrm4rmrcfzjo46zv4ip90eilo97ajvtvfokbyw1nv942tpyjt5o884522t7k1jy13imqd0c15ovio5nbsm3gc4dn05wjxlu8759xw1s16t025d2lcs51hzi5u4t5ggbthbgsrdllf74a3prrmm2dqirlimxp9s9wr2d34hc37wrm61534mi4vx4vj5zlaixnjsei844ag9d8o84gs5cuwpv013uva37v6h4swcydoeg5fode2ccemoj03sdsf8felvgb1dhabavrcy2v9vbf4efm5ltdafocuh8dmks8itdvw3e1x1euk2ak1rq78wp53de2ikbnopz23qme9d80t0h0r00fu4jbj5s04cggqaijj029t9ce8m5nkpsoysmir19l504xt8v93dvrwkfxigdmsa4fq680t5zopqfa88iobrc3xd1pzy64lmn203kevnd15lgegprrvlw9ke4qplk7o0mbvl6omjp3g3f8mmaforuvtb2r9c3c03c15i1omqet5c7rmypi2s4vilshu98ln2b4i537xp8hmg2w037so2akex4d65f268wijzkv73nkzg0q2s8914kpeu7gkggw01e48t4qn7lfcgfqh590h3kius9r9hry78vnr3uivz6e3apb4sbnlj23guogz2uoh34aws0figa2zsg2zx1m3pgo06t19648oiacx5y58bobhork8iinn3nd10f1yltdvhc7n7chp9hn8k37tbw5g2lg7m8rm4fi49a2flznkai0ybgv9ncojq8wlwmjr3h1zjok6seyjzljh60ltoxsitswzsjoy7rqdnqpy6scfp5cjcy649k21okrze0rtd9na1hx9zefrs7n70j1s35aqf9hy8w2rnb1jowqnn207vsykum3wqrjddyoec1xffk3sjfm760p3v == 
\s\1\6\p\h\i\0\r\a\o\r\t\0\o\v\1\h\d\g\w\0\6\t\k\k\a\a\b\3\l\u\i\q\1\m\d\y\p\h\x\t\8\8\x\x\k\3\2\b\k\n\8\q\n\6\y\l\3\v\r\s\9\j\r\1\d\r\h\s\6\w\4\x\f\y\s\g\o\m\d\s\1\e\1\f\w\m\l\x\u\l\2\q\k\b\r\m\4\r\m\r\c\f\z\j\o\4\6\z\v\4\i\p\9\0\e\i\l\o\9\7\a\j\v\t\v\f\o\k\b\y\w\1\n\v\9\4\2\t\p\y\j\t\5\o\8\8\4\5\2\2\t\7\k\1\j\y\1\3\i\m\q\d\0\c\1\5\o\v\i\o\5\n\b\s\m\3\g\c\4\d\n\0\5\w\j\x\l\u\8\7\5\9\x\w\1\s\1\6\t\0\2\5\d\2\l\c\s\5\1\h\z\i\5\u\4\t\5\g\g\b\t\h\b\g\s\r\d\l\l\f\7\4\a\3\p\r\r\m\m\2\d\q\i\r\l\i\m\x\p\9\s\9\w\r\2\d\3\4\h\c\3\7\w\r\m\6\1\5\3\4\m\i\4\v\x\4\v\j\5\z\l\a\i\x\n\j\s\e\i\8\4\4\a\g\9\d\8\o\8\4\g\s\5\c\u\w\p\v\0\1\3\u\v\a\3\7\v\6\h\4\s\w\c\y\d\o\e\g\5\f\o\d\e\2\c\c\e\m\o\j\0\3\s\d\s\f\8\f\e\l\v\g\b\1\d\h\a\b\a\v\r\c\y\2\v\9\v\b\f\4\e\f\m\5\l\t\d\a\f\o\c\u\h\8\d\m\k\s\8\i\t\d\v\w\3\e\1\x\1\e\u\k\2\a\k\1\r\q\7\8\w\p\5\3\d\e\2\i\k\b\n\o\p\z\2\3\q\m\e\9\d\8\0\t\0\h\0\r\0\0\f\u\4\j\b\j\5\s\0\4\c\g\g\q\a\i\j\j\0\2\9\t\9\c\e\8\m\5\n\k\p\s\o\y\s\m\i\r\1\9\l\5\0\4\x\t\8\v\9\3\d\v\r\w\k\f\x\i\g\d\m\s\a\4\f\q\6\8\0\t\5\z\o\p\q\f\a\8\8\i\o\b\r\c\3\x\d\1\p\z\y\6\4\l\m\n\2\0\3\k\e\v\n\d\1\5\l\g\e\g\p\r\r\v\l\w\9\k\e\4\q\p\l\k\7\o\0\m\b\v\l\6\o\m\j\p\3\g\3\f\8\m\m\a\f\o\r\u\v\t\b\2\r\9\c\3\c\0\3\c\1\5\i\1\o\m\q\e\t\5\c\7\r\m\y\p\i\2\s\4\v\i\l\s\h\u\9\8\l\n\2\b\4\i\5\3\7\x\p\8\h\m\g\2\w\0\3\7\s\o\2\a\k\e\x\4\d\6\5\f\2\6\8\w\i\j\z\k\v\7\3\n\k\z\g\0\q\2\s\8\9\1\4\k\p\e\u\7\g\k\g\g\w\0\1\e\4\8\t\4\q\n\7\l\f\c\g\f\q\h\5\9\0\h\3\k\i\u\s\9\r\9\h\r\y\7\8\v\n\r\3\u\i\v\z\6\e\3\a\p\b\4\s\b\n\l\j\2\3\g\u\o\g\z\2\u\o\h\3\4\a\w\s\0\f\i\g\a\2\z\s\g\2\z\x\1\m\3\p\g\o\0\6\t\1\9\6\4\8\o\i\a\c\x\5\y\5\8\b\o\b\h\o\r\k\8\i\i\n\n\3\n\d\1\0\f\1\y\l\t\d\v\h\c\7\n\7\c\h\p\9\h\n\8\k\3\7\t\b\w\5\g\2\l\g\7\m\8\r\m\4\f\i\4\9\a\2\f\l\z\n\k\a\i\0\y\b\g\v\9\n\c\o\j\q\8\w\l\w\m\j\r\3\h\1\z\j\o\k\6\s\e\y\j\z\l\j\h\6\0\l\t\o\x\s\i\t\s\w\z\s\j\o\y\7\r\q\d\n\q\p\y\6\s\c\f\p\5\c\j\c\y\6\4\9\k\2\1\o\k\r\z\e\0\r\t\d\9\n\a\1\h\x\9\z\e\f\r\s\7\n\7\0\j\1\s\3\5\a\q\f\9\h\y\8\w\2\r\n\b\1\j\o\w\q\n\n\2\0\7\v\s\y\k\u\m\3\w\q\r\j\d\d\y\o\e\c\1\x\f\f\k\3\s\j\f\m\7\6\0\p\3\v ]] 00:11:31.859 13:16:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:11:31.859 13:16:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ s16phi0raort0ov1hdgw06tkkaab3luiq1mdyphxt88xxk32bkn8qn6yl3vrs9jr1drhs6w4xfysgomds1e1fwmlxul2qkbrm4rmrcfzjo46zv4ip90eilo97ajvtvfokbyw1nv942tpyjt5o884522t7k1jy13imqd0c15ovio5nbsm3gc4dn05wjxlu8759xw1s16t025d2lcs51hzi5u4t5ggbthbgsrdllf74a3prrmm2dqirlimxp9s9wr2d34hc37wrm61534mi4vx4vj5zlaixnjsei844ag9d8o84gs5cuwpv013uva37v6h4swcydoeg5fode2ccemoj03sdsf8felvgb1dhabavrcy2v9vbf4efm5ltdafocuh8dmks8itdvw3e1x1euk2ak1rq78wp53de2ikbnopz23qme9d80t0h0r00fu4jbj5s04cggqaijj029t9ce8m5nkpsoysmir19l504xt8v93dvrwkfxigdmsa4fq680t5zopqfa88iobrc3xd1pzy64lmn203kevnd15lgegprrvlw9ke4qplk7o0mbvl6omjp3g3f8mmaforuvtb2r9c3c03c15i1omqet5c7rmypi2s4vilshu98ln2b4i537xp8hmg2w037so2akex4d65f268wijzkv73nkzg0q2s8914kpeu7gkggw01e48t4qn7lfcgfqh590h3kius9r9hry78vnr3uivz6e3apb4sbnlj23guogz2uoh34aws0figa2zsg2zx1m3pgo06t19648oiacx5y58bobhork8iinn3nd10f1yltdvhc7n7chp9hn8k37tbw5g2lg7m8rm4fi49a2flznkai0ybgv9ncojq8wlwmjr3h1zjok6seyjzljh60ltoxsitswzsjoy7rqdnqpy6scfp5cjcy649k21okrze0rtd9na1hx9zefrs7n70j1s35aqf9hy8w2rnb1jowqnn207vsykum3wqrjddyoec1xffk3sjfm760p3v == 
\s\1\6\p\h\i\0\r\a\o\r\t\0\o\v\1\h\d\g\w\0\6\t\k\k\a\a\b\3\l\u\i\q\1\m\d\y\p\h\x\t\8\8\x\x\k\3\2\b\k\n\8\q\n\6\y\l\3\v\r\s\9\j\r\1\d\r\h\s\6\w\4\x\f\y\s\g\o\m\d\s\1\e\1\f\w\m\l\x\u\l\2\q\k\b\r\m\4\r\m\r\c\f\z\j\o\4\6\z\v\4\i\p\9\0\e\i\l\o\9\7\a\j\v\t\v\f\o\k\b\y\w\1\n\v\9\4\2\t\p\y\j\t\5\o\8\8\4\5\2\2\t\7\k\1\j\y\1\3\i\m\q\d\0\c\1\5\o\v\i\o\5\n\b\s\m\3\g\c\4\d\n\0\5\w\j\x\l\u\8\7\5\9\x\w\1\s\1\6\t\0\2\5\d\2\l\c\s\5\1\h\z\i\5\u\4\t\5\g\g\b\t\h\b\g\s\r\d\l\l\f\7\4\a\3\p\r\r\m\m\2\d\q\i\r\l\i\m\x\p\9\s\9\w\r\2\d\3\4\h\c\3\7\w\r\m\6\1\5\3\4\m\i\4\v\x\4\v\j\5\z\l\a\i\x\n\j\s\e\i\8\4\4\a\g\9\d\8\o\8\4\g\s\5\c\u\w\p\v\0\1\3\u\v\a\3\7\v\6\h\4\s\w\c\y\d\o\e\g\5\f\o\d\e\2\c\c\e\m\o\j\0\3\s\d\s\f\8\f\e\l\v\g\b\1\d\h\a\b\a\v\r\c\y\2\v\9\v\b\f\4\e\f\m\5\l\t\d\a\f\o\c\u\h\8\d\m\k\s\8\i\t\d\v\w\3\e\1\x\1\e\u\k\2\a\k\1\r\q\7\8\w\p\5\3\d\e\2\i\k\b\n\o\p\z\2\3\q\m\e\9\d\8\0\t\0\h\0\r\0\0\f\u\4\j\b\j\5\s\0\4\c\g\g\q\a\i\j\j\0\2\9\t\9\c\e\8\m\5\n\k\p\s\o\y\s\m\i\r\1\9\l\5\0\4\x\t\8\v\9\3\d\v\r\w\k\f\x\i\g\d\m\s\a\4\f\q\6\8\0\t\5\z\o\p\q\f\a\8\8\i\o\b\r\c\3\x\d\1\p\z\y\6\4\l\m\n\2\0\3\k\e\v\n\d\1\5\l\g\e\g\p\r\r\v\l\w\9\k\e\4\q\p\l\k\7\o\0\m\b\v\l\6\o\m\j\p\3\g\3\f\8\m\m\a\f\o\r\u\v\t\b\2\r\9\c\3\c\0\3\c\1\5\i\1\o\m\q\e\t\5\c\7\r\m\y\p\i\2\s\4\v\i\l\s\h\u\9\8\l\n\2\b\4\i\5\3\7\x\p\8\h\m\g\2\w\0\3\7\s\o\2\a\k\e\x\4\d\6\5\f\2\6\8\w\i\j\z\k\v\7\3\n\k\z\g\0\q\2\s\8\9\1\4\k\p\e\u\7\g\k\g\g\w\0\1\e\4\8\t\4\q\n\7\l\f\c\g\f\q\h\5\9\0\h\3\k\i\u\s\9\r\9\h\r\y\7\8\v\n\r\3\u\i\v\z\6\e\3\a\p\b\4\s\b\n\l\j\2\3\g\u\o\g\z\2\u\o\h\3\4\a\w\s\0\f\i\g\a\2\z\s\g\2\z\x\1\m\3\p\g\o\0\6\t\1\9\6\4\8\o\i\a\c\x\5\y\5\8\b\o\b\h\o\r\k\8\i\i\n\n\3\n\d\1\0\f\1\y\l\t\d\v\h\c\7\n\7\c\h\p\9\h\n\8\k\3\7\t\b\w\5\g\2\l\g\7\m\8\r\m\4\f\i\4\9\a\2\f\l\z\n\k\a\i\0\y\b\g\v\9\n\c\o\j\q\8\w\l\w\m\j\r\3\h\1\z\j\o\k\6\s\e\y\j\z\l\j\h\6\0\l\t\o\x\s\i\t\s\w\z\s\j\o\y\7\r\q\d\n\q\p\y\6\s\c\f\p\5\c\j\c\y\6\4\9\k\2\1\o\k\r\z\e\0\r\t\d\9\n\a\1\h\x\9\z\e\f\r\s\7\n\7\0\j\1\s\3\5\a\q\f\9\h\y\8\w\2\r\n\b\1\j\o\w\q\n\n\2\0\7\v\s\y\k\u\m\3\w\q\r\j\d\d\y\o\e\c\1\x\f\f\k\3\s\j\f\m\7\6\0\p\3\v ]] 00:11:31.859 13:16:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:32.118 13:16:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:11:32.118 13:16:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:11:32.118 13:16:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:32.118 13:16:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:32.118 [2024-09-27 13:16:33.918751] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:32.118 [2024-09-27 13:16:33.918886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61149 ] 00:11:32.118 { 00:11:32.118 "subsystems": [ 00:11:32.118 { 00:11:32.118 "subsystem": "bdev", 00:11:32.118 "config": [ 00:11:32.118 { 00:11:32.118 "params": { 00:11:32.118 "block_size": 512, 00:11:32.118 "num_blocks": 1048576, 00:11:32.118 "name": "malloc0" 00:11:32.118 }, 00:11:32.118 "method": "bdev_malloc_create" 00:11:32.118 }, 00:11:32.118 { 00:11:32.118 "params": { 00:11:32.118 "filename": "/dev/zram1", 00:11:32.118 "name": "uring0" 00:11:32.118 }, 00:11:32.118 "method": "bdev_uring_create" 00:11:32.118 }, 00:11:32.118 { 00:11:32.118 "method": "bdev_wait_for_examine" 00:11:32.118 } 00:11:32.118 ] 00:11:32.118 } 00:11:32.118 ] 00:11:32.118 } 00:11:32.377 [2024-09-27 13:16:34.056721] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.377 [2024-09-27 13:16:34.116817] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.377 [2024-09-27 13:16:34.148069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:36.481  Copying: 140/512 [MB] (140 MBps) Copying: 279/512 [MB] (139 MBps) Copying: 420/512 [MB] (140 MBps) Copying: 512/512 [MB] (average 139 MBps) 00:11:36.481 00:11:36.481 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:11:36.481 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:11:36.481 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:11:36.481 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:11:36.481 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:11:36.481 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:11:36.481 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:36.481 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:36.481 [2024-09-27 13:16:38.253434] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:36.481 [2024-09-27 13:16:38.253528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61205 ] 00:11:36.481 { 00:11:36.481 "subsystems": [ 00:11:36.481 { 00:11:36.481 "subsystem": "bdev", 00:11:36.481 "config": [ 00:11:36.481 { 00:11:36.481 "params": { 00:11:36.481 "block_size": 512, 00:11:36.481 "num_blocks": 1048576, 00:11:36.481 "name": "malloc0" 00:11:36.481 }, 00:11:36.481 "method": "bdev_malloc_create" 00:11:36.481 }, 00:11:36.481 { 00:11:36.481 "params": { 00:11:36.481 "filename": "/dev/zram1", 00:11:36.481 "name": "uring0" 00:11:36.481 }, 00:11:36.481 "method": "bdev_uring_create" 00:11:36.481 }, 00:11:36.481 { 00:11:36.481 "params": { 00:11:36.481 "name": "uring0" 00:11:36.481 }, 00:11:36.481 "method": "bdev_uring_delete" 00:11:36.481 }, 00:11:36.481 { 00:11:36.481 "method": "bdev_wait_for_examine" 00:11:36.481 } 00:11:36.481 ] 00:11:36.481 } 00:11:36.481 ] 00:11:36.481 } 00:11:36.740 [2024-09-27 13:16:38.391354] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.740 [2024-09-27 13:16:38.451838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.740 [2024-09-27 13:16:38.482473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:37.258  Copying: 0/0 [B] (average 0 Bps) 00:11:37.258 00:11:37.258 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:11:37.258 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:11:37.258 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:37.258 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:37.258 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:37.258 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:11:37.258 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:37.258 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:37.258 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:37.258 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:37.258 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:37.258 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:37.258 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:37.258 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:37.258 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:37.258 13:16:38 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:37.258 [2024-09-27 13:16:38.953559] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:37.258 [2024-09-27 13:16:38.954237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61234 ] 00:11:37.258 { 00:11:37.258 "subsystems": [ 00:11:37.258 { 00:11:37.258 "subsystem": "bdev", 00:11:37.258 "config": [ 00:11:37.258 { 00:11:37.258 "params": { 00:11:37.258 "block_size": 512, 00:11:37.258 "num_blocks": 1048576, 00:11:37.258 "name": "malloc0" 00:11:37.258 }, 00:11:37.258 "method": "bdev_malloc_create" 00:11:37.258 }, 00:11:37.258 { 00:11:37.258 "params": { 00:11:37.258 "filename": "/dev/zram1", 00:11:37.258 "name": "uring0" 00:11:37.258 }, 00:11:37.258 "method": "bdev_uring_create" 00:11:37.258 }, 00:11:37.258 { 00:11:37.258 "params": { 00:11:37.258 "name": "uring0" 00:11:37.258 }, 00:11:37.258 "method": "bdev_uring_delete" 00:11:37.258 }, 00:11:37.258 { 00:11:37.258 "method": "bdev_wait_for_examine" 00:11:37.258 } 00:11:37.258 ] 00:11:37.258 } 00:11:37.258 ] 00:11:37.258 } 00:11:37.258 [2024-09-27 13:16:39.095899] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.517 [2024-09-27 13:16:39.170237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.517 [2024-09-27 13:16:39.205410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:37.517 [2024-09-27 13:16:39.352394] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:11:37.517 [2024-09-27 13:16:39.352461] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:11:37.517 [2024-09-27 13:16:39.352479] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:11:37.517 [2024-09-27 13:16:39.352500] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:37.776 [2024-09-27 13:16:39.521523] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:37.776 13:16:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:11:37.776 13:16:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:37.776 13:16:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:11:37.776 13:16:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:11:37.776 13:16:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:11:37.776 13:16:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:37.776 13:16:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:11:37.776 13:16:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:11:37.776 13:16:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:11:37.776 13:16:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:11:37.776 13:16:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:11:38.035 13:16:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:38.295 00:11:38.295 real 0m14.405s 00:11:38.295 user 0m9.934s 00:11:38.295 sys 0m12.131s 00:11:38.295 13:16:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.295 13:16:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:38.295 ************************************ 00:11:38.295 END TEST dd_uring_copy 00:11:38.295 ************************************ 00:11:38.295 00:11:38.295 real 0m14.656s 00:11:38.295 user 0m10.086s 00:11:38.295 sys 0m12.233s 00:11:38.295 13:16:39 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.295 13:16:39 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:11:38.295 ************************************ 00:11:38.295 END TEST spdk_dd_uring 00:11:38.295 ************************************ 00:11:38.295 13:16:39 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:11:38.295 13:16:39 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:38.295 13:16:39 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.295 13:16:39 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:38.295 ************************************ 00:11:38.295 START TEST spdk_dd_sparse 00:11:38.295 ************************************ 00:11:38.295 13:16:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:11:38.295 * Looking for test storage... 00:11:38.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:38.295 13:16:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:38.295 13:16:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lcov --version 00:11:38.295 13:16:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:38.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.555 --rc genhtml_branch_coverage=1 00:11:38.555 --rc genhtml_function_coverage=1 00:11:38.555 --rc genhtml_legend=1 00:11:38.555 --rc geninfo_all_blocks=1 00:11:38.555 --rc geninfo_unexecuted_blocks=1 00:11:38.555 00:11:38.555 ' 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:38.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.555 --rc genhtml_branch_coverage=1 00:11:38.555 --rc genhtml_function_coverage=1 00:11:38.555 --rc genhtml_legend=1 00:11:38.555 --rc geninfo_all_blocks=1 00:11:38.555 --rc geninfo_unexecuted_blocks=1 00:11:38.555 00:11:38.555 ' 00:11:38.555 13:16:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:38.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.556 --rc genhtml_branch_coverage=1 00:11:38.556 --rc genhtml_function_coverage=1 00:11:38.556 --rc genhtml_legend=1 00:11:38.556 --rc geninfo_all_blocks=1 00:11:38.556 --rc geninfo_unexecuted_blocks=1 00:11:38.556 00:11:38.556 ' 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:38.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.556 --rc genhtml_branch_coverage=1 00:11:38.556 --rc genhtml_function_coverage=1 00:11:38.556 --rc genhtml_legend=1 00:11:38.556 --rc geninfo_all_blocks=1 00:11:38.556 --rc geninfo_unexecuted_blocks=1 00:11:38.556 00:11:38.556 ' 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.556 13:16:40 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:11:38.556 1+0 records in 00:11:38.556 1+0 records out 00:11:38.556 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00538777 s, 778 MB/s 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:11:38.556 1+0 records in 00:11:38.556 1+0 records out 00:11:38.556 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00661237 s, 634 MB/s 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:11:38.556 1+0 records in 00:11:38.556 1+0 records out 00:11:38.556 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00581475 s, 721 MB/s 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:38.556 ************************************ 00:11:38.556 START TEST dd_sparse_file_to_file 00:11:38.556 ************************************ 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:11:38.556 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:38.556 [2024-09-27 13:16:40.286833] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
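Note: the prepare step logged above can be reproduced on its own; the sketch below uses only the file names and sizes that appear in the log. It builds a 100 MiB fully sparse backing file for the dd_aio bdev plus a source file with three 4 MiB data extents at offsets 0, 16 MiB and 32 MiB, which is why the stat checks further down report an apparent size of 37748736 bytes (36 MiB) but only 24576 allocated 512-byte blocks (12 MiB).

    # 100 MiB sparse backing file for the dd_aio bdev
    truncate dd_sparse_aio_disk --size 104857600
    # source file: 4 MiB of zero data at offsets 0, 16 MiB and 32 MiB
    # (seek counts 4M blocks); the gaps in between stay as holes
    dd if=/dev/zero of=file_zero1 bs=4M count=1
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8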
00:11:38.556 [2024-09-27 13:16:40.287221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61328 ] 00:11:38.556 { 00:11:38.556 "subsystems": [ 00:11:38.556 { 00:11:38.556 "subsystem": "bdev", 00:11:38.556 "config": [ 00:11:38.556 { 00:11:38.556 "params": { 00:11:38.556 "block_size": 4096, 00:11:38.556 "filename": "dd_sparse_aio_disk", 00:11:38.556 "name": "dd_aio" 00:11:38.556 }, 00:11:38.556 "method": "bdev_aio_create" 00:11:38.556 }, 00:11:38.556 { 00:11:38.556 "params": { 00:11:38.556 "lvs_name": "dd_lvstore", 00:11:38.556 "bdev_name": "dd_aio" 00:11:38.556 }, 00:11:38.556 "method": "bdev_lvol_create_lvstore" 00:11:38.556 }, 00:11:38.556 { 00:11:38.556 "method": "bdev_wait_for_examine" 00:11:38.556 } 00:11:38.556 ] 00:11:38.556 } 00:11:38.556 ] 00:11:38.556 } 00:11:38.815 [2024-09-27 13:16:40.426020] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.815 [2024-09-27 13:16:40.498741] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.815 [2024-09-27 13:16:40.534356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:39.074  Copying: 12/36 [MB] (average 1000 MBps) 00:11:39.074 00:11:39.074 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:11:39.074 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:11:39.074 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:11:39.074 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:11:39.074 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:11:39.074 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:11:39.074 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:11:39.074 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:11:39.074 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:11:39.074 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:11:39.074 00:11:39.074 real 0m0.613s 00:11:39.074 user 0m0.377s 00:11:39.074 sys 0m0.283s 00:11:39.074 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:39.074 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:39.074 ************************************ 00:11:39.074 END TEST dd_sparse_file_to_file 00:11:39.074 ************************************ 00:11:39.074 13:16:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:11:39.075 13:16:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:39.075 13:16:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:39.075 13:16:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:39.075 ************************************ 00:11:39.075 START TEST dd_sparse_file_to_bdev 00:11:39.075 
************************************ 00:11:39.075 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:11:39.075 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:39.075 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:11:39.075 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:11:39.075 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:11:39.075 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:11:39.075 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:11:39.075 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:39.075 13:16:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:39.334 [2024-09-27 13:16:40.955740] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:39.334 [2024-09-27 13:16:40.955834] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61371 ] 00:11:39.334 { 00:11:39.334 "subsystems": [ 00:11:39.334 { 00:11:39.334 "subsystem": "bdev", 00:11:39.334 "config": [ 00:11:39.334 { 00:11:39.334 "params": { 00:11:39.334 "block_size": 4096, 00:11:39.334 "filename": "dd_sparse_aio_disk", 00:11:39.334 "name": "dd_aio" 00:11:39.334 }, 00:11:39.334 "method": "bdev_aio_create" 00:11:39.334 }, 00:11:39.334 { 00:11:39.334 "params": { 00:11:39.334 "lvs_name": "dd_lvstore", 00:11:39.334 "lvol_name": "dd_lvol", 00:11:39.334 "size_in_mib": 36, 00:11:39.334 "thin_provision": true 00:11:39.334 }, 00:11:39.334 "method": "bdev_lvol_create" 00:11:39.334 }, 00:11:39.334 { 00:11:39.334 "method": "bdev_wait_for_examine" 00:11:39.334 } 00:11:39.334 ] 00:11:39.334 } 00:11:39.334 ] 00:11:39.334 } 00:11:39.334 [2024-09-27 13:16:41.094026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.335 [2024-09-27 13:16:41.169446] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.594 [2024-09-27 13:16:41.205201] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:39.852  Copying: 12/36 [MB] (average 545 MBps) 00:11:39.852 00:11:39.852 ************************************ 00:11:39.852 END TEST dd_sparse_file_to_bdev 00:11:39.852 ************************************ 00:11:39.852 00:11:39.853 real 0m0.582s 00:11:39.853 user 0m0.384s 00:11:39.853 sys 0m0.266s 00:11:39.853 13:16:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:39.853 13:16:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:39.853 13:16:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:11:39.853 13:16:41 
spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:39.853 13:16:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:39.853 13:16:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:39.853 ************************************ 00:11:39.853 START TEST dd_sparse_bdev_to_file 00:11:39.853 ************************************ 00:11:39.853 13:16:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:11:39.853 13:16:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:11:39.853 13:16:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:11:39.853 13:16:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:39.853 13:16:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:11:39.853 13:16:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:11:39.853 13:16:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:11:39.853 13:16:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:11:39.853 13:16:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:39.853 [2024-09-27 13:16:41.582798] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:39.853 [2024-09-27 13:16:41.583031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61403 ] 00:11:39.853 { 00:11:39.853 "subsystems": [ 00:11:39.853 { 00:11:39.853 "subsystem": "bdev", 00:11:39.853 "config": [ 00:11:39.853 { 00:11:39.853 "params": { 00:11:39.853 "block_size": 4096, 00:11:39.853 "filename": "dd_sparse_aio_disk", 00:11:39.853 "name": "dd_aio" 00:11:39.853 }, 00:11:39.853 "method": "bdev_aio_create" 00:11:39.853 }, 00:11:39.853 { 00:11:39.853 "method": "bdev_wait_for_examine" 00:11:39.853 } 00:11:39.853 ] 00:11:39.853 } 00:11:39.853 ] 00:11:39.853 } 00:11:40.111 [2024-09-27 13:16:41.719338] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.111 [2024-09-27 13:16:41.780350] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.111 [2024-09-27 13:16:41.819477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:40.370  Copying: 12/36 [MB] (average 1090 MBps) 00:11:40.370 00:11:40.371 13:16:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:11:40.371 13:16:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:11:40.371 13:16:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:11:40.371 13:16:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:11:40.371 13:16:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:11:40.371 13:16:42 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:11:40.371 13:16:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:11:40.371 13:16:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:11:40.371 ************************************ 00:11:40.371 END TEST dd_sparse_bdev_to_file 00:11:40.371 ************************************ 00:11:40.371 13:16:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:11:40.371 13:16:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:11:40.371 00:11:40.371 real 0m0.562s 00:11:40.371 user 0m0.350s 00:11:40.371 sys 0m0.260s 00:11:40.371 13:16:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.371 13:16:42 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:40.371 13:16:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:11:40.371 13:16:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:11:40.371 13:16:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:11:40.371 13:16:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:11:40.371 13:16:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:11:40.371 ************************************ 00:11:40.371 END TEST spdk_dd_sparse 00:11:40.371 ************************************ 00:11:40.371 00:11:40.371 real 0m2.161s 00:11:40.371 user 0m1.277s 00:11:40.371 sys 0m1.040s 00:11:40.371 13:16:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.371 13:16:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:40.371 13:16:42 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:11:40.371 13:16:42 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:40.371 13:16:42 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.371 13:16:42 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:40.371 ************************************ 00:11:40.371 START TEST spdk_dd_negative 00:11:40.371 ************************************ 00:11:40.371 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:11:40.630 * Looking for test storage... 
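For reference, the three copies exercised by the spdk_dd_sparse suite above reduce to the following invocations. $SPDK_DD stands for the full build/bin/spdk_dd path and $CONF for the generated bdev/lvstore JSON shown in the log; --bs=12582912 is the 12 MiB I/O unit used throughout.

    # file -> file, holes preserved in file_zero2
    $SPDK_DD --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json "$CONF"
    # file -> thin-provisioned lvol on the AIO-backed lvstore
    $SPDK_DD --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json "$CONF"
    # lvol -> file, holes restored in file_zero3
    $SPDK_DD --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json "$CONF"

The file-to-file and bdev-to-file copies are each followed by the same check: source and destination must agree on both stat %s (apparent size) and stat %b (allocated blocks), which is what proves the holes survived the round trip.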
00:11:40.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lcov --version 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:40.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.630 --rc genhtml_branch_coverage=1 00:11:40.630 --rc genhtml_function_coverage=1 00:11:40.630 --rc genhtml_legend=1 00:11:40.630 --rc geninfo_all_blocks=1 00:11:40.630 --rc geninfo_unexecuted_blocks=1 00:11:40.630 00:11:40.630 ' 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:40.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.630 --rc genhtml_branch_coverage=1 00:11:40.630 --rc genhtml_function_coverage=1 00:11:40.630 --rc genhtml_legend=1 00:11:40.630 --rc geninfo_all_blocks=1 00:11:40.630 --rc geninfo_unexecuted_blocks=1 00:11:40.630 00:11:40.630 ' 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:40.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.630 --rc genhtml_branch_coverage=1 00:11:40.630 --rc genhtml_function_coverage=1 00:11:40.630 --rc genhtml_legend=1 00:11:40.630 --rc geninfo_all_blocks=1 00:11:40.630 --rc geninfo_unexecuted_blocks=1 00:11:40.630 00:11:40.630 ' 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:40.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.630 --rc genhtml_branch_coverage=1 00:11:40.630 --rc genhtml_function_coverage=1 00:11:40.630 --rc genhtml_legend=1 00:11:40.630 --rc geninfo_all_blocks=1 00:11:40.630 --rc geninfo_unexecuted_blocks=1 00:11:40.630 00:11:40.630 ' 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:40.630 ************************************ 00:11:40.630 START TEST 
dd_invalid_arguments 00:11:40.630 ************************************ 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:40.630 13:16:42 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:40.630 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:11:40.630 00:11:40.630 CPU options: 00:11:40.630 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:11:40.630 (like [0,1,10]) 00:11:40.630 --lcores lcore to CPU mapping list. The list is in the format: 00:11:40.630 [<,lcores[@CPUs]>...] 00:11:40.631 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:11:40.631 Within the group, '-' is used for range separator, 00:11:40.631 ',' is used for single number separator. 00:11:40.631 '( )' can be omitted for single element group, 00:11:40.631 '@' can be omitted if cpus and lcores have the same value 00:11:40.631 --disable-cpumask-locks Disable CPU core lock files. 00:11:40.631 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:11:40.631 pollers in the app support interrupt mode) 00:11:40.631 -p, --main-core main (primary) core for DPDK 00:11:40.631 00:11:40.631 Configuration options: 00:11:40.631 -c, --config, --json JSON config file 00:11:40.631 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:11:40.631 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:11:40.631 --wait-for-rpc wait for RPCs to initialize subsystems 00:11:40.631 --rpcs-allowed comma-separated list of permitted RPCS 00:11:40.631 --json-ignore-init-errors don't exit on invalid config entry 00:11:40.631 00:11:40.631 Memory options: 00:11:40.631 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:11:40.631 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:11:40.631 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:11:40.631 -R, --huge-unlink unlink huge files after initialization 00:11:40.631 -n, --mem-channels number of memory channels used for DPDK 00:11:40.631 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:11:40.631 --msg-mempool-size global message memory pool size in count (default: 262143) 00:11:40.631 --no-huge run without using hugepages 00:11:40.631 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:11:40.631 -i, --shm-id shared memory ID (optional) 00:11:40.631 -g, --single-file-segments force creating just one hugetlbfs file 00:11:40.631 00:11:40.631 PCI options: 00:11:40.631 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:11:40.631 -B, --pci-blocked pci addr to block (can be used more than once) 00:11:40.631 -u, --no-pci disable PCI access 00:11:40.631 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:11:40.631 00:11:40.631 Log options: 00:11:40.631 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:11:40.631 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:11:40.631 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:11:40.631 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:11:40.631 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:11:40.631 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:11:40.631 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:11:40.631 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:11:40.631 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:11:40.631 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:11:40.631 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:11:40.631 --silence-noticelog disable notice level logging to stderr 00:11:40.631 00:11:40.631 Trace options: 00:11:40.631 --num-trace-entries number of trace entries for each core, must be power of 2, 00:11:40.631 setting 0 to disable trace (default 32768) 00:11:40.631 Tracepoints vary in size and can use more than one trace entry. 00:11:40.631 -e, --tpoint-group [:] 00:11:40.631 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:11:40.631 [2024-09-27 13:16:42.466000] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:11:40.890 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:11:40.890 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:11:40.890 bdev_raid, all). 00:11:40.890 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:11:40.890 a tracepoint group. First tpoint inside a group can be enabled by 00:11:40.890 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:11:40.890 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:11:40.891 in /include/spdk_internal/trace_defs.h 00:11:40.891 00:11:40.891 Other options: 00:11:40.891 -h, --help show this usage 00:11:40.891 -v, --version print SPDK version 00:11:40.891 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:11:40.891 --env-context Opaque context for use of the env implementation 00:11:40.891 00:11:40.891 Application specific: 00:11:40.891 [--------- DD Options ---------] 00:11:40.891 --if Input file. Must specify either --if or --ib. 00:11:40.891 --ib Input bdev. Must specifier either --if or --ib 00:11:40.891 --of Output file. Must specify either --of or --ob. 00:11:40.891 --ob Output bdev. Must specify either --of or --ob. 00:11:40.891 --iflag Input file flags. 00:11:40.891 --oflag Output file flags. 00:11:40.891 --bs I/O unit size (default: 4096) 00:11:40.891 --qd Queue depth (default: 2) 00:11:40.891 --count I/O unit count. The number of I/O units to copy. (default: all) 00:11:40.891 --skip Skip this many I/O units at start of input. (default: 0) 00:11:40.891 --seek Skip this many I/O units at start of output. (default: 0) 00:11:40.891 --aio Force usage of AIO. (by default io_uring is used if available) 00:11:40.891 --sparse Enable hole skipping in input target 00:11:40.891 Available iflag and oflag values: 00:11:40.891 append - append mode 00:11:40.891 direct - use direct I/O for data 00:11:40.891 directory - fail unless a directory 00:11:40.891 dsync - use synchronized I/O for data 00:11:40.891 noatime - do not update access time 00:11:40.891 noctty - do not assign controlling terminal from file 00:11:40.891 nofollow - do not follow symlinks 00:11:40.891 nonblock - use non-blocking I/O 00:11:40.891 sync - use synchronized I/O for data and metadata 00:11:40.891 ************************************ 00:11:40.891 END TEST dd_invalid_arguments 00:11:40.891 ************************************ 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:40.891 00:11:40.891 real 0m0.077s 00:11:40.891 user 0m0.049s 00:11:40.891 sys 0m0.026s 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:40.891 ************************************ 00:11:40.891 START TEST dd_double_input 00:11:40.891 ************************************ 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:40.891 [2024-09-27 13:16:42.597713] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
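The double-input failure above illustrates the pattern every case in this negative suite follows: run spdk_dd with one deliberately bad argument set and require a non-zero exit plus the matching error line. Stripped of the NOT()/valid_exec_arg plumbing, the check amounts to the sketch below (paths shortened; $SPDK_DD stands for build/bin/spdk_dd and dd.dump0 for the test dump file created by the suite).

    # must fail: --if and --ib are mutually exclusive
    if $SPDK_DD --if=dd.dump0 --ib= --ob= >err.log 2>&1; then
        echo 'spdk_dd accepted --if together with --ib' >&2
        exit 1
    fi
    grep -q 'You may specify either --if or --ib, but not both' err.log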
00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:40.891 00:11:40.891 real 0m0.082s 00:11:40.891 user 0m0.047s 00:11:40.891 sys 0m0.032s 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:11:40.891 ************************************ 00:11:40.891 END TEST dd_double_input 00:11:40.891 ************************************ 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:40.891 ************************************ 00:11:40.891 START TEST dd_double_output 00:11:40.891 ************************************ 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:40.891 13:16:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.892 13:16:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:40.892 13:16:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.892 13:16:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:40.892 13:16:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:40.892 13:16:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:40.892 [2024-09-27 13:16:42.721957] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:41.151 00:11:41.151 real 0m0.070s 00:11:41.151 user 0m0.040s 00:11:41.151 sys 0m0.028s 00:11:41.151 ************************************ 00:11:41.151 END TEST dd_double_output 00:11:41.151 ************************************ 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:41.151 ************************************ 00:11:41.151 START TEST dd_no_input 00:11:41.151 ************************************ 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:41.151 [2024-09-27 13:16:42.859182] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:41.151 00:11:41.151 real 0m0.082s 00:11:41.151 user 0m0.058s 00:11:41.151 sys 0m0.022s 00:11:41.151 ************************************ 00:11:41.151 END TEST dd_no_input 00:11:41.151 ************************************ 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:41.151 ************************************ 00:11:41.151 START TEST dd_no_output 00:11:41.151 ************************************ 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:41.151 13:16:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:41.411 [2024-09-27 13:16:43.003779] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:11:41.411 13:16:43 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:41.411 ************************************ 00:11:41.411 END TEST dd_no_output 00:11:41.411 ************************************ 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:41.411 00:11:41.411 real 0m0.088s 00:11:41.411 user 0m0.052s 00:11:41.411 sys 0m0.034s 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:41.411 ************************************ 00:11:41.411 START TEST dd_wrong_blocksize 00:11:41.411 ************************************ 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.411 13:16:43 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:41.412 [2024-09-27 13:16:43.133921] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:41.412 ************************************ 00:11:41.412 END TEST dd_wrong_blocksize 00:11:41.412 ************************************ 00:11:41.412 00:11:41.412 real 0m0.073s 00:11:41.412 user 0m0.041s 00:11:41.412 sys 0m0.030s 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:41.412 ************************************ 00:11:41.412 START TEST dd_smaller_blocksize 00:11:41.412 ************************************ 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:41.412 
13:16:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:41.412 13:16:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:41.671 [2024-09-27 13:16:43.266096] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:41.671 [2024-09-27 13:16:43.266206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61635 ] 00:11:41.671 [2024-09-27 13:16:43.407157] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.671 [2024-09-27 13:16:43.479161] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.671 [2024-09-27 13:16:43.512797] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:42.238 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:11:42.238 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:11:42.238 [2024-09-27 13:16:44.045010] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:11:42.238 [2024-09-27 13:16:44.045072] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:42.496 [2024-09-27 13:16:44.113592] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:42.496 00:11:42.496 real 0m1.003s 00:11:42.496 user 0m0.385s 00:11:42.496 sys 0m0.508s 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:42.496 ************************************ 00:11:42.496 END TEST dd_smaller_blocksize 00:11:42.496 ************************************ 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:42.496 ************************************ 00:11:42.496 START TEST dd_invalid_count 00:11:42.496 ************************************ 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 
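Unlike the pure argument-parsing failures before it, the smaller-blocksize case that just finished actually starts the app (hence the EAL memseg messages above): a 99999999999999-byte --bs passes parsing but cannot be allocated. A reduced sketch of the expectation, using the same placeholders as earlier:

    # must fail with "Cannot allocate memory - try smaller block size value"
    if $SPDK_DD --if=dd.dump0 --of=dd.dump1 --bs=99999999999999 >out.log 2>&1; then
        echo 'oversized --bs was accepted' >&2
        exit 1
    fi
    grep -q 'try smaller block size value' out.log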
00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:42.496 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:11:42.497 [2024-09-27 13:16:44.319358] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:11:42.497 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:11:42.497 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:42.497 ************************************ 00:11:42.497 END TEST dd_invalid_count 00:11:42.497 ************************************ 00:11:42.497 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:42.497 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:42.497 00:11:42.497 real 0m0.078s 00:11:42.497 user 0m0.042s 00:11:42.497 sys 0m0.034s 00:11:42.497 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:42.497 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:42.755 ************************************ 
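
Note: exit status 22 keeps coming back in these argument-validation cases (--bs=0 earlier, --count=-9 above): it matches errno EINVAL, "Invalid argument", which is presumably what spdk_dd turns into its exit code when option parsing fails. If the kernel headers are installed, the constant can be confirmed locally:

grep -w 22 /usr/include/asm-generic/errno-base.h
# expected to print the EINVAL definition, roughly: #define EINVAL 22 /* Invalid argument */
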
00:11:42.755 START TEST dd_invalid_oflag 00:11:42.755 ************************************ 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:11:42.755 [2024-09-27 13:16:44.447359] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:42.755 00:11:42.755 real 0m0.078s 00:11:42.755 user 0m0.049s 00:11:42.755 sys 0m0.028s 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:42.755 ************************************ 00:11:42.755 END TEST dd_invalid_oflag 00:11:42.755 ************************************ 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:42.755 ************************************ 00:11:42.755 START TEST dd_invalid_iflag 00:11:42.755 
************************************ 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:42.755 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:11:42.755 [2024-09-27 13:16:44.580872] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:43.014 00:11:43.014 real 0m0.090s 00:11:43.014 user 0m0.058s 00:11:43.014 sys 0m0.030s 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:43.014 ************************************ 00:11:43.014 END TEST dd_invalid_iflag 00:11:43.014 ************************************ 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:43.014 ************************************ 00:11:43.014 START TEST dd_unknown_flag 00:11:43.014 ************************************ 00:11:43.014 
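
Note: the two cases above exercise spdk_dd's flag-dependency rules: --oflag is only accepted together with --of and --iflag only together with --if, each rejection again exiting with 22. The shape of that validation, sketched as a standalone shell function rather than spdk_dd's actual parser:

# Illustrative only; argument names mirror spdk_dd's options, the logic is assumed.
check_flag_deps() {
    local if_=$1 of_=$2 iflag=$3 oflag=$4
    if [[ -n $oflag && -z $of_ ]]; then
        echo "--oflags may be used only with --of" >&2
        return 22
    fi
    if [[ -n $iflag && -z $if_ ]]; then
        echo "--iflags may be used only with --if" >&2
        return 22
    fi
}
check_flag_deps "" "" "" 0      # reproduces the first rejection above
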
13:16:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:43.014 13:16:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:11:43.014 [2024-09-27 13:16:44.718923] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
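
Note: the command starting here passes --oflag=-1; the runs below fail with "Unknown file flag: -1", and a further invocation aborts when io_uring file registration returns -9 (Bad file descriptor). The likely shape of the flag check is a lookup of each supplied flag name in a table of known names; the names and the comma splitting in this sketch are placeholders borrowed from GNU dd's oflag syntax, since spdk_dd's real table is not visible in this log.

# Hypothetical flag table; only the lookup-and-error pattern is the point.
declare -A known_file_flags=([append]=1 [direct]=1 [sync]=1 [nonblock]=1)
parse_file_flags() {
    local IFS=, f
    for f in $1; do
        if [[ -z ${known_file_flags[$f]:-} ]]; then
            echo "Unknown file flag: $f" >&2
            return 22
        fi
    done
}
parse_file_flags "-1"           # prints: Unknown file flag: -1
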
00:11:43.014 [2024-09-27 13:16:44.719015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61727 ] 00:11:43.014 [2024-09-27 13:16:44.855973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.274 [2024-09-27 13:16:44.929581] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.274 [2024-09-27 13:16:44.965168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:43.274 [2024-09-27 13:16:44.989780] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:11:43.274 [2024-09-27 13:16:44.989848] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:43.274 [2024-09-27 13:16:44.989918] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:11:43.274 [2024-09-27 13:16:44.989933] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:43.274 [2024-09-27 13:16:44.990222] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:11:43.274 [2024-09-27 13:16:44.990242] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:43.274 [2024-09-27 13:16:44.990300] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:11:43.274 [2024-09-27 13:16:44.990314] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:11:43.274 [2024-09-27 13:16:45.058131] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:11:43.533 ************************************ 00:11:43.533 END TEST dd_unknown_flag 00:11:43.533 ************************************ 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:43.533 00:11:43.533 real 0m0.490s 00:11:43.533 user 0m0.279s 00:11:43.533 sys 0m0.115s 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:43.533 ************************************ 00:11:43.533 START TEST dd_invalid_json 00:11:43.533 ************************************ 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.533 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:43.534 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.534 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:43.534 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:11:43.534 [2024-09-27 13:16:45.268658] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:11:43.534 [2024-09-27 13:16:45.268779] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61760 ] 00:11:43.793 [2024-09-27 13:16:45.402433] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.793 [2024-09-27 13:16:45.466086] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.793 [2024-09-27 13:16:45.466165] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:11:43.793 [2024-09-27 13:16:45.466179] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:43.793 [2024-09-27 13:16:45.466187] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:43.793 [2024-09-27 13:16:45.466254] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:11:43.793 ************************************ 00:11:43.793 END TEST dd_invalid_json 00:11:43.793 ************************************ 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:43.793 00:11:43.793 real 0m0.349s 00:11:43.793 user 0m0.186s 00:11:43.793 sys 0m0.061s 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:43.793 ************************************ 00:11:43.793 START TEST dd_invalid_seek 00:11:43.793 ************************************ 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1125 -- # invalid_seek 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:11:43.793 
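
Note: dd_invalid_json, which finishes above, also shows how these tests hand spdk_dd its configuration: --json /dev/fd/62 comes from bash process substitution, and the bare ':' visible in the negative_dd.sh trace is the producer side, deliberately writing nothing so that parse_json reports "JSON data cannot be empty". A small self-contained illustration of the mechanism, with some_tool standing in for spdk_dd:

# Stand-in that just prints whatever the --json argument points at.
some_tool() { cat -- "$2"; }

config='{ "subsystems": [] }'
some_tool --json <(printf '%s' "$config")   # tool receives a /dev/fd/NN path
some_tool --json <(:)                       # empty stream, like the failing test above
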
13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.793 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:43.794 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:43.794 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:43.794 13:16:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:11:44.053 { 00:11:44.053 "subsystems": [ 00:11:44.053 { 00:11:44.053 "subsystem": "bdev", 00:11:44.053 "config": [ 00:11:44.053 { 00:11:44.053 "params": { 00:11:44.053 "block_size": 512, 00:11:44.053 "num_blocks": 512, 00:11:44.053 "name": "malloc0" 00:11:44.053 }, 00:11:44.053 "method": "bdev_malloc_create" 00:11:44.053 }, 00:11:44.053 { 00:11:44.053 "params": { 00:11:44.053 "block_size": 512, 00:11:44.053 "num_blocks": 512, 00:11:44.053 "name": "malloc1" 00:11:44.053 }, 00:11:44.053 "method": "bdev_malloc_create" 00:11:44.053 }, 00:11:44.053 { 00:11:44.053 "method": "bdev_wait_for_examine" 00:11:44.053 } 00:11:44.053 ] 00:11:44.053 } 00:11:44.053 ] 00:11:44.053 } 00:11:44.053 [2024-09-27 13:16:45.658151] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
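
Note: the JSON dumped above defines two malloc bdevs, malloc0 and malloc1, each with 512 blocks of 512 bytes, while the command under test asks for --seek=513. The arithmetic behind the failure reported below (and behind the matching --skip and --count cases that follow) is simply that block 513 lies past the end of a 512-block bdev:

num_blocks=512 block_size=512
echo $(( num_blocks * block_size ))          # 262144 bytes (256 KiB) per malloc bdev
value=513                                    # passed as --seek here, --skip/--count later
(( value > num_blocks )) && echo "too big ($value) - only $num_blocks blocks available"
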
00:11:44.053 [2024-09-27 13:16:45.658401] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61785 ] 00:11:44.053 [2024-09-27 13:16:45.797017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.053 [2024-09-27 13:16:45.868857] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.311 [2024-09-27 13:16:45.903892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:44.311 [2024-09-27 13:16:45.952385] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:11:44.311 [2024-09-27 13:16:45.952459] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:44.311 [2024-09-27 13:16:46.025350] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:44.311 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:11:44.311 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:44.311 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:11:44.311 ************************************ 00:11:44.311 END TEST dd_invalid_seek 00:11:44.311 ************************************ 00:11:44.311 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:11:44.311 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:11:44.311 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:44.311 00:11:44.311 real 0m0.520s 00:11:44.311 user 0m0.347s 00:11:44.311 sys 0m0.130s 00:11:44.311 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:44.311 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:11:44.311 13:16:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:44.571 ************************************ 00:11:44.571 START TEST dd_invalid_skip 00:11:44.571 ************************************ 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1125 -- # invalid_skip 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:44.571 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:11:44.571 [2024-09-27 13:16:46.231746] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
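
Note: the method_bdev_malloc_create_0/_1 associative arrays declared above are what gen_conf (dd/common.sh) turns into the JSON bdev configuration that spdk_dd receives over /dev/fd/62; the resulting document is dumped a little further down. gen_conf's actual implementation is not shown in this log, so the conversion below is only a sketch of the mapping, using a bash nameref:

declare -A method_bdev_malloc_create_0=([name]=malloc0 [num_blocks]=512 [block_size]=512)

emit_malloc_entry() {
    local -n m=$1                 # nameref to the array whose name is passed in
    printf '{ "params": { "block_size": %s, "num_blocks": %s, "name": "%s" }, "method": "bdev_malloc_create" }\n' \
        "${m[block_size]}" "${m[num_blocks]}" "${m[name]}"
}

emit_malloc_entry method_bdev_malloc_create_0
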
00:11:44.571 [2024-09-27 13:16:46.232019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61819 ] 00:11:44.571 { 00:11:44.571 "subsystems": [ 00:11:44.571 { 00:11:44.571 "subsystem": "bdev", 00:11:44.571 "config": [ 00:11:44.571 { 00:11:44.571 "params": { 00:11:44.571 "block_size": 512, 00:11:44.571 "num_blocks": 512, 00:11:44.571 "name": "malloc0" 00:11:44.571 }, 00:11:44.571 "method": "bdev_malloc_create" 00:11:44.571 }, 00:11:44.571 { 00:11:44.571 "params": { 00:11:44.571 "block_size": 512, 00:11:44.571 "num_blocks": 512, 00:11:44.571 "name": "malloc1" 00:11:44.571 }, 00:11:44.571 "method": "bdev_malloc_create" 00:11:44.571 }, 00:11:44.571 { 00:11:44.571 "method": "bdev_wait_for_examine" 00:11:44.571 } 00:11:44.571 ] 00:11:44.571 } 00:11:44.571 ] 00:11:44.571 } 00:11:44.571 [2024-09-27 13:16:46.367946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.830 [2024-09-27 13:16:46.426600] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.830 [2024-09-27 13:16:46.458493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:44.830 [2024-09-27 13:16:46.504238] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:11:44.830 [2024-09-27 13:16:46.504301] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:44.830 [2024-09-27 13:16:46.574305] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:44.830 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:11:44.830 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:44.830 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:11:44.830 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:11:44.830 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:11:44.830 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:44.830 00:11:44.830 real 0m0.496s 00:11:44.830 user 0m0.335s 00:11:44.830 sys 0m0.121s 00:11:44.830 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:44.830 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:11:44.830 ************************************ 00:11:44.830 END TEST dd_invalid_skip 00:11:44.830 ************************************ 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:45.089 ************************************ 00:11:45.089 START TEST dd_invalid_input_count 00:11:45.089 ************************************ 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1125 -- # invalid_input_count 00:11:45.089 13:16:46 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:45.089 13:16:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:11:45.089 { 00:11:45.089 "subsystems": [ 00:11:45.089 { 00:11:45.089 "subsystem": "bdev", 00:11:45.089 "config": [ 00:11:45.089 { 00:11:45.089 "params": { 00:11:45.089 "block_size": 512, 00:11:45.089 "num_blocks": 512, 00:11:45.089 "name": "malloc0" 00:11:45.089 }, 
00:11:45.089 "method": "bdev_malloc_create" 00:11:45.089 }, 00:11:45.089 { 00:11:45.089 "params": { 00:11:45.089 "block_size": 512, 00:11:45.089 "num_blocks": 512, 00:11:45.089 "name": "malloc1" 00:11:45.089 }, 00:11:45.089 "method": "bdev_malloc_create" 00:11:45.089 }, 00:11:45.089 { 00:11:45.089 "method": "bdev_wait_for_examine" 00:11:45.089 } 00:11:45.089 ] 00:11:45.089 } 00:11:45.089 ] 00:11:45.089 } 00:11:45.089 [2024-09-27 13:16:46.791601] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:45.089 [2024-09-27 13:16:46.791711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61857 ] 00:11:45.089 [2024-09-27 13:16:46.928672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.348 [2024-09-27 13:16:47.000562] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.348 [2024-09-27 13:16:47.034987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:45.348 [2024-09-27 13:16:47.083314] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:11:45.348 [2024-09-27 13:16:47.083384] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:45.348 [2024-09-27 13:16:47.154959] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:45.608 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:11:45.608 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:45.608 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:11:45.608 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:11:45.608 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:11:45.608 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:45.608 ************************************ 00:11:45.608 END TEST dd_invalid_input_count 00:11:45.608 ************************************ 00:11:45.608 00:11:45.608 real 0m0.529s 00:11:45.608 user 0m0.359s 00:11:45.608 sys 0m0.121s 00:11:45.608 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.608 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:11:45.608 13:16:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:11:45.608 13:16:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:45.608 13:16:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.608 13:16:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:45.608 ************************************ 00:11:45.608 START TEST dd_invalid_output_count 00:11:45.608 ************************************ 00:11:45.609 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1125 -- # invalid_output_count 00:11:45.609 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 
mbdev0_b=512 mbdev0_bs=512 00:11:45.609 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:45.609 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:11:45.609 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:11:45.609 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:11:45.609 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:11:45.609 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:11:45.609 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:11:45.609 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:11:45.609 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.609 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:45.609 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.609 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:45.609 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.609 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:45.609 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.609 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:45.609 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:11:45.609 { 00:11:45.609 "subsystems": [ 00:11:45.609 { 00:11:45.609 "subsystem": "bdev", 00:11:45.609 "config": [ 00:11:45.609 { 00:11:45.609 "params": { 00:11:45.609 "block_size": 512, 00:11:45.609 "num_blocks": 512, 00:11:45.609 "name": "malloc0" 00:11:45.609 }, 00:11:45.609 "method": "bdev_malloc_create" 00:11:45.609 }, 00:11:45.609 { 00:11:45.609 "method": "bdev_wait_for_examine" 00:11:45.609 } 00:11:45.609 ] 00:11:45.609 } 00:11:45.609 ] 00:11:45.609 } 00:11:45.609 [2024-09-27 13:16:47.362748] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
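
Note: dd_invalid_output_count, set up above, copies from the host file dd.dump0 (--if) into the single malloc bdev malloc0 (--ob), so its generated configuration contains only one bdev_malloc_create entry plus bdev_wait_for_examine. Reassembled from the timestamp-interleaved dump above into plain JSON (only the file name conf.json is invented here, the content is not):

cat > conf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
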
00:11:45.609 [2024-09-27 13:16:47.362865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61892 ] 00:11:45.868 [2024-09-27 13:16:47.495743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.868 [2024-09-27 13:16:47.578362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.868 [2024-09-27 13:16:47.609252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:45.868 [2024-09-27 13:16:47.646700] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:11:45.868 [2024-09-27 13:16:47.646786] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:45.868 [2024-09-27 13:16:47.712200] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:46.127 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:11:46.127 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:46.127 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:11:46.127 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:11:46.127 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:11:46.127 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:46.128 00:11:46.128 real 0m0.502s 00:11:46.128 user 0m0.344s 00:11:46.128 sys 0m0.106s 00:11:46.128 ************************************ 00:11:46.128 END TEST dd_invalid_output_count 00:11:46.128 ************************************ 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:46.128 ************************************ 00:11:46.128 START TEST dd_bs_not_multiple 00:11:46.128 ************************************ 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1125 -- # bs_not_multiple 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:46.128 13:16:47 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:46.128 13:16:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:11:46.128 [2024-09-27 13:16:47.921944] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
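
Note: the last negative case, prepared above, passes --bs=513 against the same 512-byte malloc bdevs; the run below rejects it with "--bs value must be a multiple of input native block size (512)". The underlying check is a plain divisibility test:

bs=513 native_block=512
if (( bs % native_block != 0 )); then
    echo "--bs value must be a multiple of input native block size ($native_block)"
fi
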
00:11:46.128 { 00:11:46.128 "subsystems": [ 00:11:46.128 { 00:11:46.128 "subsystem": "bdev", 00:11:46.128 "config": [ 00:11:46.128 { 00:11:46.128 "params": { 00:11:46.128 "block_size": 512, 00:11:46.128 "num_blocks": 512, 00:11:46.128 "name": "malloc0" 00:11:46.128 }, 00:11:46.128 "method": "bdev_malloc_create" 00:11:46.128 }, 00:11:46.128 { 00:11:46.128 "params": { 00:11:46.128 "block_size": 512, 00:11:46.128 "num_blocks": 512, 00:11:46.128 "name": "malloc1" 00:11:46.128 }, 00:11:46.128 "method": "bdev_malloc_create" 00:11:46.128 }, 00:11:46.128 { 00:11:46.128 "method": "bdev_wait_for_examine" 00:11:46.128 } 00:11:46.128 ] 00:11:46.128 } 00:11:46.128 ] 00:11:46.128 } 00:11:46.128 [2024-09-27 13:16:47.922048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61923 ] 00:11:46.387 [2024-09-27 13:16:48.060504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.387 [2024-09-27 13:16:48.119700] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.387 [2024-09-27 13:16:48.149832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:46.387 [2024-09-27 13:16:48.196152] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:11:46.387 [2024-09-27 13:16:48.196214] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:46.659 [2024-09-27 13:16:48.264811] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:46.659 13:16:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:11:46.659 13:16:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:46.659 13:16:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:11:46.659 13:16:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:11:46.659 13:16:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:11:46.659 13:16:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:46.659 00:11:46.659 real 0m0.522s 00:11:46.659 user 0m0.357s 00:11:46.659 sys 0m0.124s 00:11:46.659 13:16:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.659 ************************************ 00:11:46.659 END TEST dd_bs_not_multiple 00:11:46.659 ************************************ 00:11:46.659 13:16:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:11:46.659 ************************************ 00:11:46.659 END TEST spdk_dd_negative 00:11:46.659 ************************************ 00:11:46.659 00:11:46.659 real 0m6.221s 00:11:46.659 user 0m3.411s 00:11:46.659 sys 0m2.189s 00:11:46.659 13:16:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.659 13:16:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:46.659 ************************************ 00:11:46.659 END TEST spdk_dd 00:11:46.659 ************************************ 00:11:46.659 00:11:46.659 real 1m11.867s 00:11:46.659 user 0m47.133s 00:11:46.659 sys 0m28.988s 00:11:46.659 13:16:48 spdk_dd -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:46.659 13:16:48 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:46.930 13:16:48 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:11:46.930 13:16:48 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:11:46.930 13:16:48 -- spdk/autotest.sh@256 -- # timing_exit lib 00:11:46.930 13:16:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:46.930 13:16:48 -- common/autotest_common.sh@10 -- # set +x 00:11:46.930 13:16:48 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:11:46.930 13:16:48 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:11:46.930 13:16:48 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:11:46.930 13:16:48 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:11:46.930 13:16:48 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:11:46.930 13:16:48 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:11:46.930 13:16:48 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:46.930 13:16:48 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:46.930 13:16:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.930 13:16:48 -- common/autotest_common.sh@10 -- # set +x 00:11:46.930 ************************************ 00:11:46.930 START TEST nvmf_tcp 00:11:46.930 ************************************ 00:11:46.930 13:16:48 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:46.930 * Looking for test storage... 00:11:46.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:46.930 13:16:48 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:46.930 13:16:48 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:11:46.930 13:16:48 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:46.930 13:16:48 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.930 13:16:48 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.931 13:16:48 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.931 13:16:48 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:11:46.931 13:16:48 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.931 13:16:48 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:46.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.931 --rc genhtml_branch_coverage=1 00:11:46.931 --rc genhtml_function_coverage=1 00:11:46.931 --rc genhtml_legend=1 00:11:46.931 --rc geninfo_all_blocks=1 00:11:46.931 --rc geninfo_unexecuted_blocks=1 00:11:46.931 00:11:46.931 ' 00:11:46.931 13:16:48 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:46.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.931 --rc genhtml_branch_coverage=1 00:11:46.931 --rc genhtml_function_coverage=1 00:11:46.931 --rc genhtml_legend=1 00:11:46.931 --rc geninfo_all_blocks=1 00:11:46.931 --rc geninfo_unexecuted_blocks=1 00:11:46.931 00:11:46.931 ' 00:11:46.931 13:16:48 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:46.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.931 --rc genhtml_branch_coverage=1 00:11:46.931 --rc genhtml_function_coverage=1 00:11:46.931 --rc genhtml_legend=1 00:11:46.931 --rc geninfo_all_blocks=1 00:11:46.931 --rc geninfo_unexecuted_blocks=1 00:11:46.931 00:11:46.931 ' 00:11:46.931 13:16:48 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:46.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.931 --rc genhtml_branch_coverage=1 00:11:46.931 --rc genhtml_function_coverage=1 00:11:46.931 --rc genhtml_legend=1 00:11:46.931 --rc geninfo_all_blocks=1 00:11:46.931 --rc geninfo_unexecuted_blocks=1 00:11:46.931 00:11:46.931 ' 00:11:46.931 13:16:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:46.931 13:16:48 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:46.931 13:16:48 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.931 13:16:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:46.931 ************************************ 00:11:46.931 START TEST nvmf_target_core 00:11:46.931 ************************************ 00:11:46.931 13:16:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:47.190 * Looking for test storage... 
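The block just above is the lcov version probe that scripts/common.sh runs before each sub-suite (it repeats below for every nested test): lcov --version is parsed with awk, the detected version (1.15 in this run) is compared against 2, and because it sorts below 2 the legacy --rc lcov_branch_coverage / lcov_function_coverage options are exported. The sketch below is a self-contained reconstruction of that field-by-field comparison from the xtrace output, not the verbatim SPDK helper.

    # Version comparison as traced above (scripts/common.sh lt/cmp_versions), reconstructed.
    cmp_lt() {                       # return 0 when version $1 sorts before version $2
        local IFS='.-:'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                     # equal versions are not "less than"
    }
    cmp_lt 1.15 2 && echo "lcov predates 2.x: keep the legacy --rc lcov_*_coverage=1 options"
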
00:11:47.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:47.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.190 --rc genhtml_branch_coverage=1 00:11:47.190 --rc genhtml_function_coverage=1 00:11:47.190 --rc genhtml_legend=1 00:11:47.190 --rc geninfo_all_blocks=1 00:11:47.190 --rc geninfo_unexecuted_blocks=1 00:11:47.190 00:11:47.190 ' 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:47.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.190 --rc genhtml_branch_coverage=1 00:11:47.190 --rc genhtml_function_coverage=1 00:11:47.190 --rc genhtml_legend=1 00:11:47.190 --rc geninfo_all_blocks=1 00:11:47.190 --rc geninfo_unexecuted_blocks=1 00:11:47.190 00:11:47.190 ' 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:47.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.190 --rc genhtml_branch_coverage=1 00:11:47.190 --rc genhtml_function_coverage=1 00:11:47.190 --rc genhtml_legend=1 00:11:47.190 --rc geninfo_all_blocks=1 00:11:47.190 --rc geninfo_unexecuted_blocks=1 00:11:47.190 00:11:47.190 ' 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:47.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.190 --rc genhtml_branch_coverage=1 00:11:47.190 --rc genhtml_function_coverage=1 00:11:47.190 --rc genhtml_legend=1 00:11:47.190 --rc geninfo_all_blocks=1 00:11:47.190 --rc geninfo_unexecuted_blocks=1 00:11:47.190 00:11:47.190 ' 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.190 13:16:48 
nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.190 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@50 -- # : 0 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:47.191 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@13 -- # TEST_ARGS=("$@") 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@15 -- # [[ 1 -eq 0 ]] 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:47.191 ************************************ 00:11:47.191 START TEST nvmf_host_management 00:11:47.191 ************************************ 00:11:47.191 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:47.191 * Looking for test storage... 
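The "integer expression expected" message above (it shows up again a few lines below when host_management sources common.sh a second time) is not a test failure: line 31 of common.sh runs a numeric test on a variable that expands to an empty string, the [ builtin complains, the test returns non-zero and the suite carries on. A minimal reproduction plus one defensive variant follows; the variable name is a placeholder, since the real operand is not visible in the xtrace.

    # Minimal reproduction of the warning, plus a guarded form.
    SOME_FLAG=""                               # placeholder for whatever common.sh line 31 tests
    if [ "$SOME_FLAG" -eq 1 ]; then            # prints: [: : integer expression expected, returns non-zero
        echo "optional branch"
    fi
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then       # defaulting to 0 keeps the numeric test well-formed
        echo "optional branch"
    fi
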
00:11:47.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:47.450 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:47.450 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:47.450 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:11:47.450 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:47.450 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.450 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.450 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.450 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.450 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.450 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.450 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.450 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.450 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.450 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:47.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.451 --rc genhtml_branch_coverage=1 00:11:47.451 --rc genhtml_function_coverage=1 00:11:47.451 --rc genhtml_legend=1 00:11:47.451 --rc geninfo_all_blocks=1 00:11:47.451 --rc geninfo_unexecuted_blocks=1 00:11:47.451 00:11:47.451 ' 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:47.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.451 --rc genhtml_branch_coverage=1 00:11:47.451 --rc genhtml_function_coverage=1 00:11:47.451 --rc genhtml_legend=1 00:11:47.451 --rc geninfo_all_blocks=1 00:11:47.451 --rc geninfo_unexecuted_blocks=1 00:11:47.451 00:11:47.451 ' 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:47.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.451 --rc genhtml_branch_coverage=1 00:11:47.451 --rc genhtml_function_coverage=1 00:11:47.451 --rc genhtml_legend=1 00:11:47.451 --rc geninfo_all_blocks=1 00:11:47.451 --rc geninfo_unexecuted_blocks=1 00:11:47.451 00:11:47.451 ' 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:47.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.451 --rc genhtml_branch_coverage=1 00:11:47.451 --rc genhtml_function_coverage=1 00:11:47.451 --rc genhtml_legend=1 00:11:47.451 --rc geninfo_all_blocks=1 00:11:47.451 --rc geninfo_unexecuted_blocks=1 00:11:47.451 00:11:47.451 ' 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
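In the common.sh evaluation that follows for the host_management suite, a host NQN is generated with nvme gen-hostnqn, NVME_HOSTID is derived from its UUID suffix, and the NVME_HOST flag array is assembled exactly as the trace shows. The parameter expansion used for NVME_HOSTID is an assumption here (only its resulting value appears in the xtrace); the sketch requires nvme-cli.

    # Host NQN / host ID setup as it appears in the trace; the NVME_HOSTID expansion is assumed.
    NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}             # assumed: keep the text after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    printf '%s\n' "${NVME_HOST[@]}"
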
00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:47.451 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:47.452 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:47.452 13:16:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # prepare_net_devs 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # local -g is_hw=no 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # remove_target_ns 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@276 -- # nvmf_veth_init 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@233 -- # create_target_ns 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:47.452 13:16:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@234 -- # create_main_bridge 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@114 -- # delete_main_bridge 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@130 -- # return 0 00:11:47.452 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@160 -- # set_up initiator0 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@160 -- # set_up target0 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set target0 up 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@161 -- # set_up target0_br 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 
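Around this point nvmftestinit is wiring up the first initiator/target pair: a network namespace for the target side, the nvmf_br bridge for the *_br peers, one veth pair per endpoint, addresses from the 10.0.0.0/24 pool, and an iptables accept rule for port 4420. The sketch below regroups the commands visible in the trace (some already above, the rest in the lines that follow) into one runnable sequence; the iptables comment tags are omitted and root privileges are assumed.

    # Topology for initiator0/target0 as built by the surrounding trace, regrouped.
    ip netns add nvmf_ns_spdk
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk                       # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    ip link set initiator0 up
    ip netns exec nvmf_ns_spdk ip link set target0 up
    ip link set initiator0_br master nvmf_br                     # bridge the *_br peers together
    ip link set initiator0_br up
    ip link set target0_br master nvmf_br
    ip link set target0_br up
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
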
00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_ns target0 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:11:47.712 10.0.0.1 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:11:47.712 13:16:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:11:47.712 10.0.0.2 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@75 -- # set_up initiator0 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:11:47.712 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@138 -- # set_up target0_br 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@160 -- # set_up initiator1 00:11:47.713 
13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@160 -- # set_up target1 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set target1 up 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@161 -- # set_up target1_br 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_ns target1 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:11:47.713 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:11:47.973 13:16:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772163 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:11:47.973 10.0.0.3 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772164 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:11:47.973 10.0.0.4 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@75 -- # set_up initiator1 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:11:47.973 13:16:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@138 -- # set_up target1_br 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment 
--comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 2 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:47.973 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo initiator0 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=initiator0 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping 
-c 1 10.0.0.1' 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:47.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:11:47.974 00:11:47.974 --- 10.0.0.1 ping statistics --- 00:11:47.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.974 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo target0 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=target0 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:11:47.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:47.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 00:11:47.974 00:11:47.974 --- 10.0.0.2 ping statistics --- 00:11:47.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.974 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair++ )) 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator1 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo initiator1 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=initiator1 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:11:47.974 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:47.974 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:11:47.974 00:11:47.974 --- 10.0.0.3 ping statistics --- 00:11:47.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.974 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target1 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target1 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo target1 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=target1 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:11:47.974 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:11:47.974 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.093 ms 00:11:47.974 00:11:47.974 --- 10.0.0.4 ping statistics --- 00:11:47.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.974 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair++ )) 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # return 0 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:47.974 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo initiator0 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=initiator0 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@183 -- # get_ip_address initiator1 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator1 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo initiator1 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=initiator1 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo target0 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=target0 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target1 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target1 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo target1 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=target1 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@69 -- # starttarget 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:47.975 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:48.234 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@324 -- # nvmfpid=62272 00:11:48.234 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # waitforlisten 62272 00:11:48.234 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 62272 ']' 00:11:48.234 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:48.234 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.234 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:48.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.234 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.234 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:48.234 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:48.234 [2024-09-27 13:16:49.881613] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:48.234 [2024-09-27 13:16:49.881757] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.234 [2024-09-27 13:16:50.022492] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:48.493 [2024-09-27 13:16:50.097874] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:48.493 [2024-09-27 13:16:50.097935] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:48.493 [2024-09-27 13:16:50.097959] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:48.493 [2024-09-27 13:16:50.097969] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:48.493 [2024-09-27 13:16:50.097979] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
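(The trace above launches nvmf_tgt inside the nvmf_ns_spdk network namespace and then blocks until the app's RPC socket answers. A minimal standalone sketch of that launch-and-wait pattern, reusing the repo path and flags visible in the trace; the readiness poll via framework_wait_init and the rpc.py options are assumptions for illustration, not the exact waitforlisten implementation:

ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# poll the default RPC socket until the target responds (assumed readiness check)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 framework_wait_init >/dev/null 2>&1; do
    sleep 0.5
done)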
00:11:48.493 [2024-09-27 13:16:50.098137] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.493 [2024-09-27 13:16:50.098844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.493 [2024-09-27 13:16:50.098941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:11:48.493 [2024-09-27 13:16:50.099087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.493 [2024-09-27 13:16:50.132752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:48.493 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:48.493 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:11:48.493 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:11:48.493 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:48.493 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:48.493 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.493 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:48.493 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.493 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:48.493 [2024-09-27 13:16:50.227852] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.493 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.493 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:48.493 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:48.493 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:48.493 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:11:48.493 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:48.493 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:48.493 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.494 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:48.494 Malloc0 00:11:48.494 [2024-09-27 13:16:50.285636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.494 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.494 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:48.494 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:48.494 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:11:48.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:48.494 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62313 00:11:48.494 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62313 /var/tmp/bdevperf.sock 00:11:48.494 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 62313 ']' 00:11:48.494 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:48.494 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:48.494 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:48.494 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:48.494 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # config=() 00:11:48.494 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:48.494 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # local subsystem config 00:11:48.494 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:48.494 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # for subsystem in "${@:-1}" 00:11:48.494 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:48.494 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # config+=("$(cat <<-EOF 00:11:48.494 { 00:11:48.494 "params": { 00:11:48.494 "name": "Nvme$subsystem", 00:11:48.494 "trtype": "$TEST_TRANSPORT", 00:11:48.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:48.494 "adrfam": "ipv4", 00:11:48.494 "trsvcid": "$NVMF_PORT", 00:11:48.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:48.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:48.494 "hdgst": ${hdgst:-false}, 00:11:48.494 "ddgst": ${ddgst:-false} 00:11:48.494 }, 00:11:48.494 "method": "bdev_nvme_attach_controller" 00:11:48.494 } 00:11:48.494 EOF 00:11:48.494 )") 00:11:48.753 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # cat 00:11:48.753 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # jq . 
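(Here bdevperf is started against /var/tmp/bdevperf.sock with -q 64 -o 65536 -w verify -t 10 and takes its bdev configuration from the JSON that gen_nvmf_target_json writes to /dev/fd/63; the fully expanded document is printed just below. As a hedged alternative sketch, the same NVMe-oF controller could be attached after startup over the RPC socket instead of via --json; the values are taken from the expanded config in the trace, while the rpc.py option names are assumptions:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0)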
00:11:48.753 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@393 -- # IFS=, 00:11:48.753 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # printf '%s\n' '{ 00:11:48.753 "params": { 00:11:48.753 "name": "Nvme0", 00:11:48.753 "trtype": "tcp", 00:11:48.753 "traddr": "10.0.0.2", 00:11:48.753 "adrfam": "ipv4", 00:11:48.753 "trsvcid": "4420", 00:11:48.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:48.753 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:48.753 "hdgst": false, 00:11:48.753 "ddgst": false 00:11:48.753 }, 00:11:48.753 "method": "bdev_nvme_attach_controller" 00:11:48.753 }' 00:11:48.753 [2024-09-27 13:16:50.388247] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:48.753 [2024-09-27 13:16:50.388336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62313 ] 00:11:48.753 [2024-09-27 13:16:50.556447] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.011 [2024-09-27 13:16:50.639889] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.011 [2024-09-27 13:16:50.681420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:49.011 Running I/O for 10 seconds... 00:11:49.579 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:49.579 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:11:49.579 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:49.579 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.579 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:49.579 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.579 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:49.579 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:49.579 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:49.579 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:49.579 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:49.579 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:49.579 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:49.579 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:49.579 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:49.579 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:49.579 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.579 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:49.839 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.839 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:11:49.839 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:11:49.839 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:49.839 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:49.839 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:49.839 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:49.839 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.839 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:49.839 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.839 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:49.839 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.839 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:49.839 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.839 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:49.839 [2024-09-27 13:16:51.492416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.839 [2024-09-27 13:16:51.492463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.839 [2024-09-27 13:16:51.492490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.839 [2024-09-27 13:16:51.492501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.839 [2024-09-27 13:16:51.492513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.839 [2024-09-27 13:16:51.492523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.839 [2024-09-27 13:16:51.492534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.839 [2024-09-27 13:16:51.492544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.839 [2024-09-27 13:16:51.492556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.839 [2024-09-27 13:16:51.492565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.839 [2024-09-27 13:16:51.492577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.839 [2024-09-27 13:16:51.492587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.839 [2024-09-27 13:16:51.492598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.839 [2024-09-27 13:16:51.492607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.839 [2024-09-27 13:16:51.492619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.839 [2024-09-27 13:16:51.492628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.839 [2024-09-27 13:16:51.492640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.839 [2024-09-27 13:16:51.492650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.839 [2024-09-27 13:16:51.492662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.839 [2024-09-27 13:16:51.492671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.839 [2024-09-27 13:16:51.492698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.839 [2024-09-27 13:16:51.492710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.839 [2024-09-27 13:16:51.492722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.839 [2024-09-27 13:16:51.492732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.839 [2024-09-27 13:16:51.492744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.839 [2024-09-27 13:16:51.492753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.839 [2024-09-27 13:16:51.492774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.839 [2024-09-27 13:16:51.492789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.839 [2024-09-27 13:16:51.492801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.839 [2024-09-27 13:16:51.492810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.492822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.492831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.492843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.492852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.492865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.492874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.492886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.492895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.492907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.492917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.492928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.492938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.492949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.492959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.492970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.492979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.492990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:11:49.840 [2024-09-27 13:16:51.493012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 
[2024-09-27 13:16:51.493241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 
13:16:51.493450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.840 [2024-09-27 13:16:51.493640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.840 [2024-09-27 13:16:51.493649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.841 [2024-09-27 13:16:51.493661] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.841 [2024-09-27 13:16:51.493670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.841 [2024-09-27 13:16:51.493692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.841 [2024-09-27 13:16:51.493704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.841 [2024-09-27 13:16:51.493716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.841 [2024-09-27 13:16:51.493725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.841 [2024-09-27 13:16:51.493736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.841 [2024-09-27 13:16:51.493746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.841 [2024-09-27 13:16:51.493757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.841 [2024-09-27 13:16:51.493767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.841 [2024-09-27 13:16:51.493778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.841 [2024-09-27 13:16:51.493787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.841 [2024-09-27 13:16:51.493799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.841 [2024-09-27 13:16:51.493808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.841 [2024-09-27 13:16:51.493819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.841 [2024-09-27 13:16:51.493839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.841 [2024-09-27 13:16:51.493853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.841 [2024-09-27 13:16:51.493862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.841 [2024-09-27 13:16:51.493874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.841 [2024-09-27 13:16:51.493884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.841 [2024-09-27 13:16:51.493895] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72e850 is same with the state(6) to be set
00:11:49.841 [2024-09-27 13:16:51.493946] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x72e850 was disconnected and freed. reset controller.
00:11:49.841 [2024-09-27 13:16:51.494070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:11:49.841 [2024-09-27 13:16:51.494088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:49.841 [2024-09-27 13:16:51.494099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:11:49.841 [2024-09-27 13:16:51.494109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:49.841 [2024-09-27 13:16:51.494119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:11:49.841 [2024-09-27 13:16:51.494129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:49.841 [2024-09-27 13:16:51.494139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:11:49.841 [2024-09-27 13:16:51.494148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:49.841 [2024-09-27 13:16:51.494158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72ecc0 is same with the state(6) to be set
00:11:49.841 [2024-09-27 13:16:51.495321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:11:49.841 task offset: 0 on job bdev=Nvme0n1 fails
00:11:49.841
00:11:49.841 Latency(us)
00:11:49.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:49.841 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:49.841 Job: Nvme0n1 ended in about 0.70 seconds with error
00:11:49.841 Verification LBA range: start 0x0 length 0x400
00:11:49.841 Nvme0n1 : 0.70 1454.64 90.92 90.92 0.00 40245.75 2234.18 44087.85
00:11:49.841 ===================================================================================================================
00:11:49.841 Total : 1454.64 90.92 90.92 0.00 40245.75 2234.18 44087.85
00:11:49.841 [2024-09-27 13:16:51.497448] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:49.841 [2024-09-27 13:16:51.497475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x72ecc0 (9): Bad file descriptor
00:11:49.841 [2024-09-27 13:16:51.508243] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
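The failed completions above all carry status (00/08): status code type 0x0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion", which is what the initiator reports once the target tears down the submission queue and bdev_nvme resets the controller. If the console output is captured to a file, the aborted I/O can be tallied with standard tools; the file name bdevperf.log below is only an assumed example, not something this run produces:

    grep -c 'ABORTED - SQ DELETION' bdevperf.log          # total aborted completions
    grep -o 'lba:[0-9]*' bdevperf.log | sort -u | wc -l   # distinct LBAs among the aborted WRITEs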
00:11:50.778 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62313 00:11:50.778 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62313) - No such process 00:11:50.778 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:50.778 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:50.778 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:50.778 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:50.778 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # config=() 00:11:50.778 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # local subsystem config 00:11:50.778 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # for subsystem in "${@:-1}" 00:11:50.778 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # config+=("$(cat <<-EOF 00:11:50.778 { 00:11:50.778 "params": { 00:11:50.778 "name": "Nvme$subsystem", 00:11:50.778 "trtype": "$TEST_TRANSPORT", 00:11:50.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:50.778 "adrfam": "ipv4", 00:11:50.778 "trsvcid": "$NVMF_PORT", 00:11:50.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:50.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:50.778 "hdgst": ${hdgst:-false}, 00:11:50.778 "ddgst": ${ddgst:-false} 00:11:50.778 }, 00:11:50.778 "method": "bdev_nvme_attach_controller" 00:11:50.778 } 00:11:50.778 EOF 00:11:50.778 )") 00:11:50.778 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # cat 00:11:50.778 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # jq . 00:11:50.778 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@393 -- # IFS=, 00:11:50.779 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # printf '%s\n' '{ 00:11:50.779 "params": { 00:11:50.779 "name": "Nvme0", 00:11:50.779 "trtype": "tcp", 00:11:50.779 "traddr": "10.0.0.2", 00:11:50.779 "adrfam": "ipv4", 00:11:50.779 "trsvcid": "4420", 00:11:50.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:50.779 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:50.779 "hdgst": false, 00:11:50.779 "ddgst": false 00:11:50.779 }, 00:11:50.779 "method": "bdev_nvme_attach_controller" 00:11:50.779 }' 00:11:50.779 [2024-09-27 13:16:52.535863] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
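The bdevperf invocation above never touches a config file on disk: gen_nvmf_target_json emits the bdev_nvme_attach_controller parameters for this run (name Nvme0, TCP to 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode0) and the resulting JSON is handed to bdevperf through /dev/fd/62. Written out by hand, the same pattern looks roughly like the sketch below; the "params" block is the one printed in the log, while the surrounding "subsystems"/"config" wrapper is reconstructed from SPDK's usual JSON config layout rather than echoed in this log, so treat it as an approximation of what the helper generates:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 64 -o 65536 -w verify -t 1 --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    )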
00:11:50.779 [2024-09-27 13:16:52.535943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62351 ] 00:11:51.037 [2024-09-27 13:16:52.671814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.037 [2024-09-27 13:16:52.742915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.037 [2024-09-27 13:16:52.786091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:51.295 Running I/O for 1 seconds... 00:11:52.271 1472.00 IOPS, 92.00 MiB/s 00:11:52.271 Latency(us) 00:11:52.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:52.271 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:52.271 Verification LBA range: start 0x0 length 0x400 00:11:52.271 Nvme0n1 : 1.02 1508.01 94.25 0.00 0.00 41489.33 3932.16 43849.54 00:11:52.271 =================================================================================================================== 00:11:52.271 Total : 1508.01 94.25 0.00 0.00 41489.33 3932.16 43849.54 00:11:52.271 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:52.271 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:52.271 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:11:52.271 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:11:52.271 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:52.271 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@331 -- # nvmfcleanup 00:11:52.271 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:11:52.530 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:52.530 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:11:52.530 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:52.530 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:52.530 rmmod nvme_tcp 00:11:52.530 rmmod nvme_fabrics 00:11:52.530 rmmod nvme_keyring 00:11:52.530 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:52.530 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:11:52.530 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:11:52.530 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # '[' -n 62272 ']' 00:11:52.530 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@333 -- # killprocess 62272 00:11:52.530 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 62272 ']' 00:11:52.530 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 62272 00:11:52.530 
13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:11:52.530 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:52.530 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62272 00:11:52.530 killing process with pid 62272 00:11:52.530 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:52.530 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:52.530 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62272' 00:11:52.530 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 62272 00:11:52.530 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 62272 00:11:52.788 [2024-09-27 13:16:54.399229] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:52.788 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # nvmf_fini 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@264 -- # local dev 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@267 -- # remove_target_ns 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@268 -- # delete_main_bridge 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@124 
-- # [[ -n '' ]] 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@271 -- # continue 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@271 -- # continue 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@284 -- # iptr 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@538 -- # iptables-save 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@538 -- # iptables-restore 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:52.789 00:11:52.789 real 0m5.642s 00:11:52.789 user 0m20.019s 00:11:52.789 sys 0m1.543s 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:52.789 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:52.789 ************************************ 00:11:52.789 END TEST nvmf_host_management 00:11:52.789 ************************************ 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:53.048 ************************************ 00:11:53.048 START TEST nvmf_lvol 00:11:53.048 ************************************ 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:53.048 * Looking for test storage... 00:11:53.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:11:53.048 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:53.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.049 --rc genhtml_branch_coverage=1 00:11:53.049 --rc genhtml_function_coverage=1 00:11:53.049 --rc genhtml_legend=1 00:11:53.049 --rc geninfo_all_blocks=1 00:11:53.049 --rc geninfo_unexecuted_blocks=1 00:11:53.049 00:11:53.049 ' 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:53.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.049 --rc genhtml_branch_coverage=1 00:11:53.049 --rc genhtml_function_coverage=1 00:11:53.049 --rc genhtml_legend=1 00:11:53.049 --rc geninfo_all_blocks=1 00:11:53.049 --rc geninfo_unexecuted_blocks=1 00:11:53.049 00:11:53.049 ' 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:53.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.049 --rc genhtml_branch_coverage=1 00:11:53.049 --rc genhtml_function_coverage=1 00:11:53.049 --rc genhtml_legend=1 00:11:53.049 --rc geninfo_all_blocks=1 00:11:53.049 --rc geninfo_unexecuted_blocks=1 00:11:53.049 00:11:53.049 ' 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:53.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.049 --rc genhtml_branch_coverage=1 00:11:53.049 --rc genhtml_function_coverage=1 00:11:53.049 --rc genhtml_legend=1 00:11:53.049 --rc geninfo_all_blocks=1 00:11:53.049 --rc geninfo_unexecuted_blocks=1 00:11:53.049 00:11:53.049 ' 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.049 13:16:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:53.049 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # prepare_net_devs 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # local -g is_hw=no 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # remove_target_ns 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@276 -- # nvmf_veth_init 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@233 -- # create_target_ns 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@234 -- # create_main_bridge 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@114 -- # delete_main_bridge 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@130 -- # return 0 00:11:53.049 13:16:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:53.049 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@160 -- # set_up initiator0 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:11:53.050 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:11:53.309 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:11:53.309 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:11:53.309 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:11:53.309 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:11:53.309 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@160 -- # set_up target0 00:11:53.309 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:11:53.309 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:53.309 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:11:53.309 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set target0 up 00:11:53.309 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@161 -- # set_up target0_br 00:11:53.309 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:11:53.309 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_ns target0 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 
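Each address in this topology starts life as a plain integer from a pool (167772161 is 0x0A000001) and is only rendered as a dotted quad right before ip addr add. The conversion is simple byte extraction; a minimal standalone equivalent, written independently rather than copied from the val_to_ip helper in nvmf/setup.sh:

    val=167772161                                   # 0x0A000001, first address in the pool
    printf '%u.%u.%u.%u\n' $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
                           $(( (val >> 8) & 255 ))  $(( val & 255 ))
    # -> 10.0.0.1; the next pool value, 167772162, maps the same way to 10.0.0.2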
00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:11:53.310 10.0.0.1 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:11:53.310 10.0.0.2 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@75 -- # set_up initiator0 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:11:53.310 
13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@138 -- # set_up target0_br 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:11:53.310 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:11:53.310 13:16:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@160 -- # set_up initiator1 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@160 -- # set_up target1 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set target1 up 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@161 -- # set_up target1_br 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:53.310 13:16:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_ns target1 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:11:53.310 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772163 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:11:53.311 10.0.0.3 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772164 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:11:53.311 13:16:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:11:53.311 10.0.0.4 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@75 -- # set_up initiator1 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@138 -- # set_up target1_br 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:11:53.311 13:16:55 
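
The entries above finish the second initiator/target pair: a veth pair per endpoint (initiator1/initiator1_br, target1/target1_br), the target end moved into the nvmf_ns_spdk namespace, addresses drawn from the 32-bit pool (167772163 -> 10.0.0.3 on initiator1, 167772164 -> 10.0.0.4 on target1, each also written to the interface's ifalias for later lookup), and the host-side *_br peers enslaved to the nvmf_br bridge. A stand-alone reproduction of that topology with plain ip(8) commands, condensed from the trace rather than calling the setup.sh helpers (names, addresses and the /24 prefix are the ones used above):

    # veth pairs: one end stays on the host, the _br end joins the bridge
    ip link add initiator1 type veth peer name initiator1_br
    ip link add target1    type veth peer name target1_br
    ip link set initiator1 up; ip link set initiator1_br up
    ip link set target1 up;    ip link set target1_br up

    # the target side lives inside the SPDK namespace
    ip link set target1 netns nvmf_ns_spdk

    # addresses from the pool; ifalias doubles as the harness's bookkeeping
    ip addr add 10.0.0.3/24 dev initiator1
    echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
    echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias
    ip netns exec nvmf_ns_spdk ip link set target1 up

    # bridge the host-side peers so initiator and target segments can reach each other
    ip link set initiator1_br master nvmf_br
    ip link set target1_br master nvmf_br
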
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 2 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:53.311 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo initiator0 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=initiator0 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:53.571 13:16:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:53.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:53.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:11:53.571 00:11:53.571 --- 10.0.0.1 ping statistics --- 00:11:53.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.571 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target0 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo target0 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=target0 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:11:53.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:53.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:11:53.571 00:11:53.571 --- 10.0.0.2 ping statistics --- 00:11:53.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.571 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair++ )) 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator1 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo initiator1 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=initiator1 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:11:53.571 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:53.571 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:11:53.571 00:11:53.571 --- 10.0.0.3 ping statistics --- 00:11:53.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.571 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target1 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target1 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo target1 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=target1 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:11:53.571 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:11:53.571 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.160 ms 00:11:53.571 00:11:53.571 --- 10.0.0.4 ping statistics --- 00:11:53.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.571 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair++ )) 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # return 0 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:53.571 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo initiator0 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=initiator0 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:53.572 13:16:55 
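
With both pairs configured, ping_ips walks them and confirms reachability in both directions: 10.0.0.1 and 10.0.0.3 (the initiator aliases) are pinged from inside nvmf_ns_spdk, while 10.0.0.2 and 10.0.0.4 (the target aliases) are pinged from the host. The pool values are rendered with printf '%u.%u.%u.%u' as seen earlier in the trace; only the already-split octets are visible there, so the shift/mask below is an assumed equivalent of setup.sh's val_to_ip, not a copy of it:

    # assumed equivalent: split a 32-bit pool value (167772161 == 0x0A000001)
    # into the dotted-quad octets the traced printf consumes
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) \
            $(( (val >> 16) & 0xff )) \
            $(( (val >>  8) & 0xff )) \
            $((  val        & 0xff ))
    }
    val_to_ip 167772163   # -> 10.0.0.3
    val_to_ip 167772164   # -> 10.0.0.4
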
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator1 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo initiator1 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=initiator1 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target0 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo target0 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=target0 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:11:53.572 13:16:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target1 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target1 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo target1 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=target1 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:53.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
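
nvmf_legacy_env then reads each address back out of the interfaces' ifalias files and maps the per-pair devices onto the variable names the older test scripts expect, and modprobe nvme-tcp loads the kernel initiator for later connects. Condensed from the trace, the effective values for this run are:

    # resolved above via cat /sys/class/net/<dev>/ifalias
    NVMF_TARGET_INTERFACE=target0
    NVMF_TARGET_INTERFACE2=target1
    NVMF_FIRST_INITIATOR_IP=10.0.0.1
    NVMF_SECOND_INITIATOR_IP=10.0.0.3
    NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_SECOND_TARGET_IP=10.0.0.4
    NVMF_TRANSPORT_OPTS='-t tcp -o'
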
00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@324 -- # nvmfpid=62623 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # waitforlisten 62623 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 62623 ']' 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:53.572 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:53.572 [2024-09-27 13:16:55.396648] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:11:53.572 [2024-09-27 13:16:55.396759] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.831 [2024-09-27 13:16:55.538996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:53.831 [2024-09-27 13:16:55.597398] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.831 [2024-09-27 13:16:55.597460] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.831 [2024-09-27 13:16:55.597471] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.831 [2024-09-27 13:16:55.597480] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.831 [2024-09-27 13:16:55.597488] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
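
nvmfappstart launches the target inside the namespace as ip netns exec nvmf_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x7 (pid 62623): shared-memory instance 0, all tracepoint groups enabled, core mask 0x7 covering the three cores reported available. The startup notices above also spell out how to pull the trace data during or after the run; both commands are quoted from those notices:

    # snapshot the live tracepoints of app instance 0
    spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0
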
00:11:53.831 [2024-09-27 13:16:55.597648] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.831 [2024-09-27 13:16:55.598167] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.831 [2024-09-27 13:16:55.598183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.831 [2024-09-27 13:16:55.627120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:53.831 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:53.831 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:11:53.831 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:11:53.831 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:54.089 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:54.089 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.089 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:54.347 [2024-09-27 13:16:55.958476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.347 13:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:54.604 13:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:54.604 13:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:54.861 13:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:54.861 13:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:55.120 13:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:55.377 13:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5a16669c-0d11-4a19-ba38-1dbfa093ade9 00:11:55.377 13:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5a16669c-0d11-4a19-ba38-1dbfa093ade9 lvol 20 00:11:55.635 13:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=13ab9cac-d3e7-48ba-a2a5-cbfb3615eba1 00:11:55.635 13:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:55.893 13:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 13ab9cac-d3e7-48ba-a2a5-cbfb3615eba1 00:11:56.151 13:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:56.410 [2024-09-27 13:16:58.055243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
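
With the reactors running and uring selected as the socket implementation, nvmf_lvol.sh builds the whole stack over /var/tmp/spdk.sock: a TCP transport, two malloc bdevs striped into a raid0, an lvstore on top of it, one lvol carved out of the store, and a subsystem that exposes that lvol on 10.0.0.2:4420. The calls below are condensed from the RPC trace above; rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py as invoked there, and the UUIDs are the ones this run returned (a fresh run returns new ones):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                                  # -> Malloc0
    rpc.py bdev_malloc_create 64 512                                  # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs                         # -> 5a16669c-0d11-4a19-ba38-1dbfa093ade9
    rpc.py bdev_lvol_create -u 5a16669c-0d11-4a19-ba38-1dbfa093ade9 lvol 20
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 13ab9cac-d3e7-48ba-a2a5-cbfb3615eba1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The run that follows adds a discovery listener, starts spdk_nvme_perf with randwrite I/O against 10.0.0.2:4420, and exercises bdev_lvol_snapshot, bdev_lvol_resize, bdev_lvol_clone and bdev_lvol_inflate against the same lvol while that I/O is in flight, before tearing everything back down.
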
00:11:56.410 13:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:56.669 13:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:56.669 13:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62691 00:11:56.669 13:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:57.602 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 13ab9cac-d3e7-48ba-a2a5-cbfb3615eba1 MY_SNAPSHOT 00:11:58.168 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f9ffea5c-9cb1-4442-bce7-3578f3014063 00:11:58.168 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 13ab9cac-d3e7-48ba-a2a5-cbfb3615eba1 30 00:11:58.427 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone f9ffea5c-9cb1-4442-bce7-3578f3014063 MY_CLONE 00:11:58.684 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0951cb10-2a0f-4074-ae85-b1c1ed0b79d1 00:11:58.684 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 0951cb10-2a0f-4074-ae85-b1c1ed0b79d1 00:11:59.253 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62691 00:12:07.375 Initializing NVMe Controllers 00:12:07.375 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:07.375 Controller IO queue size 128, less than required. 00:12:07.375 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:07.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:07.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:07.375 Initialization complete. Launching workers. 
00:12:07.375 ======================================================== 00:12:07.375 Latency(us) 00:12:07.375 Device Information : IOPS MiB/s Average min max 00:12:07.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10553.30 41.22 12132.50 3800.00 75373.47 00:12:07.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10424.00 40.72 12288.27 3287.86 66939.21 00:12:07.375 ======================================================== 00:12:07.375 Total : 20977.30 81.94 12209.91 3287.86 75373.47 00:12:07.375 00:12:07.375 13:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:07.375 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 13ab9cac-d3e7-48ba-a2a5-cbfb3615eba1 00:12:07.633 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5a16669c-0d11-4a19-ba38-1dbfa093ade9 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@331 -- # nvmfcleanup 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:07.892 rmmod nvme_tcp 00:12:07.892 rmmod nvme_fabrics 00:12:07.892 rmmod nvme_keyring 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # '[' -n 62623 ']' 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@333 -- # killprocess 62623 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 62623 ']' 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 62623 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62623 00:12:07.892 killing process with pid 62623 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 62623' 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 62623 00:12:07.892 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 62623 00:12:08.151 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:12:08.151 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # nvmf_fini 00:12:08.151 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@264 -- # local dev 00:12:08.151 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@267 -- # remove_target_ns 00:12:08.151 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:08.151 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:08.151 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:08.151 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@268 -- # delete_main_bridge 00:12:08.151 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:08.151 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:12:08.151 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:12:08.151 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:12:08.151 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:12:08.151 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # ip link delete 
initiator1 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@271 -- # continue 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@271 -- # continue 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@284 -- # iptr 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@538 -- # iptables-save 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@538 -- # iptables-restore 00:12:08.410 ************************************ 00:12:08.410 END TEST nvmf_lvol 00:12:08.410 ************************************ 00:12:08.410 00:12:08.410 real 0m15.453s 00:12:08.410 user 1m4.115s 00:12:08.410 sys 0m4.173s 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:08.410 ************************************ 00:12:08.410 START TEST nvmf_lvs_grow 00:12:08.410 ************************************ 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:08.410 * Looking for test storage... 
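
Note how the firewall bookkeeping pays off in the nvmf_lvol teardown traced just above, and again in the nvmf_lvs_grow setup that follows: every rule is inserted with an SPDK_NVMF comment tag, so cleanup never has to track rule numbers and simply filters the tag out of a full dump. The same pattern stand-alone, with the INPUT rule quoted from this run:

    # insert with a comment tag (what the ipts wrapper does)
    iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'
    # later, drop every tagged rule in one pass (what the iptr cleanup does)
    iptables-save | grep -v SPDK_NVMF | iptables-restore
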
00:12:08.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:12:08.410 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:08.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.671 --rc genhtml_branch_coverage=1 00:12:08.671 --rc genhtml_function_coverage=1 00:12:08.671 --rc genhtml_legend=1 00:12:08.671 --rc geninfo_all_blocks=1 00:12:08.671 --rc geninfo_unexecuted_blocks=1 00:12:08.671 00:12:08.671 ' 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:08.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.671 --rc genhtml_branch_coverage=1 00:12:08.671 --rc genhtml_function_coverage=1 00:12:08.671 --rc genhtml_legend=1 00:12:08.671 --rc geninfo_all_blocks=1 00:12:08.671 --rc geninfo_unexecuted_blocks=1 00:12:08.671 00:12:08.671 ' 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:08.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.671 --rc genhtml_branch_coverage=1 00:12:08.671 --rc genhtml_function_coverage=1 00:12:08.671 --rc genhtml_legend=1 00:12:08.671 --rc geninfo_all_blocks=1 00:12:08.671 --rc geninfo_unexecuted_blocks=1 00:12:08.671 00:12:08.671 ' 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:08.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.671 --rc genhtml_branch_coverage=1 00:12:08.671 --rc genhtml_function_coverage=1 00:12:08.671 --rc genhtml_legend=1 00:12:08.671 --rc geninfo_all_blocks=1 00:12:08.671 --rc geninfo_unexecuted_blocks=1 00:12:08.671 00:12:08.671 ' 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:08.671 13:17:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:08.671 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:08.672 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:08.672 13:17:10 
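
The "[: : integer expression expected" message above is noise rather than a failure: build_nvmf_app_args runs '[' '' -eq 1 ']' with an empty first argument, test(1) complains that '' is not an integer, the check simply evaluates false, and sourcing continues at line 35. A sketch of the behaviour and a quieter guarded form; the echo body is a hypothetical placeholder, since the real branch body is not visible in the trace:

    arg=''
    [ "$arg" -eq 1 ] && echo "would enable the optional flag"   # prints "integer expression expected" to stderr
    # guard the numeric compare on a non-empty value to keep the log clean
    [ -n "$arg" ] && [ "$arg" -eq 1 ] && echo "would enable the optional flag"
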
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # prepare_net_devs 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # local -g is_hw=no 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # remove_target_ns 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@276 -- # nvmf_veth_init 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@233 -- # create_target_ns 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:08.672 13:17:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@234 -- # create_main_bridge 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@114 -- # delete_main_bridge 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@130 -- # return 0 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ veth == 
phy ]] 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # set_up initiator0 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # set_up target0 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set target0 up 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@161 -- # set_up target0_br 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_ns target0 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # local dev=target0 
ns=nvmf_ns_spdk 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:12:08.672 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:12:08.673 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:12:08.673 10.0.0.1 00:12:08.673 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:12:08.673 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:08.673 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:08.673 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:08.673 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:12:08.673 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:12:08.673 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:08.673 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:12:08.673 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:12:08.673 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:12:08.673 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:12:08.673 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:12:08.673 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:12:08.933 10.0.0.2 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@75 -- # set_up initiator0 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:12:08.933 13:17:10 
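setup_interface_pair wires initiator/target pair 0: one veth link per side, the target end moved into the namespace, and each address mirrored into the device's ifalias for later lookup. A condensed sketch of pair 0 as executed above (link-up commands omitted; the bridge and firewall steps the trace performs just after this point are included for completeness):

  # shared bridge for all pairs, with intra-bridge forwarding allowed
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

  # veth pairs: the initiator end stays on the host, target0 moves into the namespace
  ip link add initiator0 type veth peer name initiator0_br
  ip link add target0 type veth peer name target0_br
  ip link set target0 netns nvmf_ns_spdk

  # 10.0.0.1 (initiator0) <-> 10.0.0.2 (target0), recorded in ifalias
  ip addr add 10.0.0.1/24 dev initiator0
  echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
  echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias

  # plug the bridge ends into nvmf_br and accept NVMe/TCP (port 4420) from the initiator
  ip link set initiator0_br master nvmf_br
  ip link set target0_br master nvmf_br
  iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'

Pair 1 (initiator1/target1 with 10.0.0.3/10.0.0.4) follows the same recipe further down in the trace.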
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@138 -- # set_up target0_br 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:12:08.933 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # set_up initiator1 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@68 -- # 
create_veth target1 target1_br 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # set_up target1 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set target1 up 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@161 -- # set_up target1_br 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_ns target1 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772163 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:12:08.934 10.0.0.3 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 
in_ns=NVMF_TARGET_NS_CMD 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772164 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:12:08.934 10.0.0.4 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@75 -- # set_up initiator1 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@138 -- # set_up target1_br 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 2 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n 
initiator0 ]] 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo initiator0 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=initiator0 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:08.934 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:08.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:08.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:12:08.935 00:12:08.935 --- 10.0.0.1 ping statistics --- 00:12:08.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.935 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo target0 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=target0 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:12:08.935 
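ping_ips re-reads each address from the ifalias files written during setup and sends one probe per direction: the namespace pings the host-side initiator address, the host pings the in-namespace target address. The equivalent commands (the second lookup and ping follow immediately below):

  cat /sys/class/net/initiator0/ifalias                          # 10.0.0.1
  ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias  # 10.0.0.2
  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
  ping -c 1 10.0.0.2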
13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:08.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:12:08.935 00:12:08.935 --- 10.0.0.2 ping statistics --- 00:12:08.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.935 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator1 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo initiator1 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=initiator1 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD 
count=1 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:12:08.935 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:12:08.935 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:08.935 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:12:08.935 00:12:08.935 --- 10.0.0.3 ping statistics --- 00:12:08.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.935 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:12:09.194 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:12:09.194 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:12:09.194 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:09.194 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:09.194 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:09.194 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:09.194 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:09.194 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target1 00:12:09.194 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:09.194 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo target1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=target1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:12:09.195 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:12:09.195 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:12:09.195 00:12:09.195 --- 10.0.0.4 ping statistics --- 00:12:09.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.195 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # return 0 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo initiator0 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=initiator0 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:12:09.195 13:17:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo initiator1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=initiator1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo target0 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=target0 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.195 13:17:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo target1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=target1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@324 -- # nvmfpid=63068 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # waitforlisten 63068 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 63068 ']' 00:12:09.195 
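With connectivity confirmed, nvmf_legacy_env pins the addressing the rest of the suite relies on, exactly as resolved above:

  NVMF_FIRST_INITIATOR_IP=10.0.0.1    # host end of pair 0
  NVMF_FIRST_TARGET_IP=10.0.0.2       # target0 inside nvmf_ns_spdk
  NVMF_SECOND_INITIATOR_IP=10.0.0.3   # host end of pair 1
  NVMF_SECOND_TARGET_IP=10.0.0.4      # target1 inside nvmf_ns_spdk
  NVMF_TRANSPORT_OPTS='-t tcp -o'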
13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:09.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.195 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.196 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:09.196 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:09.196 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:09.196 [2024-09-27 13:17:10.955714] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:12:09.196 [2024-09-27 13:17:10.956496] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.454 [2024-09-27 13:17:11.100082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.454 [2024-09-27 13:17:11.170385] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.454 [2024-09-27 13:17:11.170446] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.454 [2024-09-27 13:17:11.170460] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.454 [2024-09-27 13:17:11.170470] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.454 [2024-09-27 13:17:11.170479] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
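nvmfappstart -m 0x1 launches the target application inside the namespace and blocks until its RPC socket answers. A simplified sketch of what the trace shows (waitforlisten is the autotest_common.sh helper that polls /var/tmp/spdk.sock; the harness records the PID itself rather than via $!):

  # start the NVMe-oF target on core 0, inside the target namespace
  ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # here: PID 63068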
00:12:09.454 [2024-09-27 13:17:11.170526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.454 [2024-09-27 13:17:11.203366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:09.454 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:09.454 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:12:09.454 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:12:09.454 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:09.454 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:09.454 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.455 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:10.022 [2024-09-27 13:17:11.609722] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.022 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:10.022 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:10.022 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:10.022 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:10.022 ************************************ 00:12:10.022 START TEST lvs_grow_clean 00:12:10.022 ************************************ 00:12:10.022 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:12:10.022 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:10.022 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:10.022 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:10.022 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:10.022 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:10.022 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:10.022 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:10.022 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:10.022 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:10.281 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:12:10.281 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:10.539 13:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4e77edde-496b-4e1c-9e4e-60b495203551 00:12:10.539 13:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e77edde-496b-4e1c-9e4e-60b495203551 00:12:10.539 13:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:10.797 13:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:10.797 13:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:10.797 13:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4e77edde-496b-4e1c-9e4e-60b495203551 lvol 150 00:12:11.056 13:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=55dec02e-2dea-4d96-aa87-d13d076f8eb3 00:12:11.056 13:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:11.056 13:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:11.314 [2024-09-27 13:17:13.127643] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:11.314 [2024-09-27 13:17:13.127733] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:11.314 true 00:12:11.314 13:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e77edde-496b-4e1c-9e4e-60b495203551 00:12:11.314 13:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:11.573 13:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:11.573 13:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:11.831 13:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 55dec02e-2dea-4d96-aa87-d13d076f8eb3 00:12:12.090 13:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:12.349 [2024-09-27 13:17:14.132189] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.349 13:17:14 
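The lvs_grow_clean case drives everything over scripts/rpc.py (default socket /var/tmp/spdk.sock): create the TCP transport, back a logical-volume store with a 200M AIO file, carve a 150M lvol out of it, grow the file to 400M and rescan so the lvstore can pick up the extra clusters, then export the lvol through subsystem cnode0 on the first target address. Condensed from the calls above, with the lvstore and lvol UUIDs taken from the trace and paths shortened to their repo-relative form:

  rpc.py nvmf_create_transport -t tcp -o -u 8192

  truncate -s 200M test/nvmf/target/aio_bdev
  rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  rpc.py bdev_lvol_create -u 4e77edde-496b-4e1c-9e4e-60b495203551 lvol 150

  truncate -s 400M test/nvmf/target/aio_bdev
  rpc.py bdev_aio_rescan aio_bdev            # lvstore sees the grown backing file

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 55dec02e-2dea-4d96-aa87-d13d076f8eb3
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420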
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:12.608 13:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63149 00:12:12.608 13:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:12.608 13:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:12.608 13:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63149 /var/tmp/bdevperf.sock 00:12:12.608 13:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 63149 ']' 00:12:12.608 13:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:12.867 13:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:12.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:12.867 13:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:12.867 13:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:12.867 13:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:12.867 [2024-09-27 13:17:14.500906] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
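Note: the xtrace entries above are the whole target-side setup for lvs_grow_clean. A minimal sketch of that sequence, distilled from this trace (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py as invoked above; the lvstore and lvol UUIDs are the values returned in this particular run and would differ elsewhere):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  rpc.py bdev_lvol_get_lvstores -u 4e77edde-496b-4e1c-9e4e-60b495203551 | jq -r '.[0].total_data_clusters'   # 49 clusters at 200M
  rpc.py bdev_lvol_create -u 4e77edde-496b-4e1c-9e4e-60b495203551 lvol 150
  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  rpc.py bdev_aio_rescan aio_bdev                      # cluster count stays at 49 until the lvstore is grown
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 55dec02e-2dea-4d96-aa87-d13d076f8eb3
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420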
00:12:12.867 [2024-09-27 13:17:14.500981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63149 ] 00:12:12.867 [2024-09-27 13:17:14.639807] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.126 [2024-09-27 13:17:14.721597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.126 [2024-09-27 13:17:14.759174] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:14.062 13:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:14.062 13:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:12:14.062 13:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:14.062 Nvme0n1 00:12:14.320 13:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:14.320 [ 00:12:14.320 { 00:12:14.320 "name": "Nvme0n1", 00:12:14.320 "aliases": [ 00:12:14.320 "55dec02e-2dea-4d96-aa87-d13d076f8eb3" 00:12:14.320 ], 00:12:14.320 "product_name": "NVMe disk", 00:12:14.320 "block_size": 4096, 00:12:14.320 "num_blocks": 38912, 00:12:14.320 "uuid": "55dec02e-2dea-4d96-aa87-d13d076f8eb3", 00:12:14.320 "numa_id": -1, 00:12:14.320 "assigned_rate_limits": { 00:12:14.320 "rw_ios_per_sec": 0, 00:12:14.320 "rw_mbytes_per_sec": 0, 00:12:14.320 "r_mbytes_per_sec": 0, 00:12:14.320 "w_mbytes_per_sec": 0 00:12:14.320 }, 00:12:14.320 "claimed": false, 00:12:14.320 "zoned": false, 00:12:14.320 "supported_io_types": { 00:12:14.320 "read": true, 00:12:14.320 "write": true, 00:12:14.320 "unmap": true, 00:12:14.320 "flush": true, 00:12:14.320 "reset": true, 00:12:14.320 "nvme_admin": true, 00:12:14.320 "nvme_io": true, 00:12:14.320 "nvme_io_md": false, 00:12:14.320 "write_zeroes": true, 00:12:14.320 "zcopy": false, 00:12:14.320 "get_zone_info": false, 00:12:14.320 "zone_management": false, 00:12:14.320 "zone_append": false, 00:12:14.320 "compare": true, 00:12:14.320 "compare_and_write": true, 00:12:14.320 "abort": true, 00:12:14.320 "seek_hole": false, 00:12:14.320 "seek_data": false, 00:12:14.320 "copy": true, 00:12:14.320 "nvme_iov_md": false 00:12:14.320 }, 00:12:14.320 "memory_domains": [ 00:12:14.320 { 00:12:14.320 "dma_device_id": "system", 00:12:14.320 "dma_device_type": 1 00:12:14.320 } 00:12:14.320 ], 00:12:14.320 "driver_specific": { 00:12:14.320 "nvme": [ 00:12:14.320 { 00:12:14.320 "trid": { 00:12:14.320 "trtype": "TCP", 00:12:14.320 "adrfam": "IPv4", 00:12:14.320 "traddr": "10.0.0.2", 00:12:14.320 "trsvcid": "4420", 00:12:14.320 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:14.320 }, 00:12:14.320 "ctrlr_data": { 00:12:14.320 "cntlid": 1, 00:12:14.320 "vendor_id": "0x8086", 00:12:14.320 "model_number": "SPDK bdev Controller", 00:12:14.320 "serial_number": "SPDK0", 00:12:14.320 "firmware_revision": "25.01", 00:12:14.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:14.320 "oacs": { 00:12:14.320 "security": 0, 00:12:14.320 "format": 0, 00:12:14.320 "firmware": 0, 
00:12:14.320 "ns_manage": 0 00:12:14.320 }, 00:12:14.320 "multi_ctrlr": true, 00:12:14.320 "ana_reporting": false 00:12:14.320 }, 00:12:14.320 "vs": { 00:12:14.320 "nvme_version": "1.3" 00:12:14.320 }, 00:12:14.320 "ns_data": { 00:12:14.320 "id": 1, 00:12:14.320 "can_share": true 00:12:14.320 } 00:12:14.320 } 00:12:14.320 ], 00:12:14.320 "mp_policy": "active_passive" 00:12:14.320 } 00:12:14.320 } 00:12:14.320 ] 00:12:14.578 13:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63172 00:12:14.578 13:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:14.578 13:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:14.578 Running I/O for 10 seconds... 00:12:15.515 Latency(us) 00:12:15.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.515 Nvme0n1 : 1.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:12:15.515 =================================================================================================================== 00:12:15.515 Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:12:15.515 00:12:16.544 13:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4e77edde-496b-4e1c-9e4e-60b495203551 00:12:16.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:16.544 Nvme0n1 : 2.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:12:16.544 =================================================================================================================== 00:12:16.544 Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:12:16.544 00:12:16.816 true 00:12:16.816 13:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:16.816 13:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e77edde-496b-4e1c-9e4e-60b495203551 00:12:17.075 13:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:17.075 13:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:17.075 13:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63172 00:12:17.642 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:17.642 Nvme0n1 : 3.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:12:17.642 =================================================================================================================== 00:12:17.642 Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:12:17.642 00:12:18.579 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.579 Nvme0n1 : 4.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:12:18.579 =================================================================================================================== 00:12:18.579 Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:12:18.579 00:12:19.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:12:19.515 Nvme0n1 : 5.00 6959.60 27.19 0.00 0.00 0.00 0.00 0.00 00:12:19.515 =================================================================================================================== 00:12:19.515 Total : 6959.60 27.19 0.00 0.00 0.00 0.00 0.00 00:12:19.515 00:12:20.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.451 Nvme0n1 : 6.00 6840.33 26.72 0.00 0.00 0.00 0.00 0.00 00:12:20.451 =================================================================================================================== 00:12:20.451 Total : 6840.33 26.72 0.00 0.00 0.00 0.00 0.00 00:12:20.451 00:12:21.826 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.826 Nvme0n1 : 7.00 6861.00 26.80 0.00 0.00 0.00 0.00 0.00 00:12:21.826 =================================================================================================================== 00:12:21.826 Total : 6861.00 26.80 0.00 0.00 0.00 0.00 0.00 00:12:21.826 00:12:22.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.760 Nvme0n1 : 8.00 6876.50 26.86 0.00 0.00 0.00 0.00 0.00 00:12:22.760 =================================================================================================================== 00:12:22.760 Total : 6876.50 26.86 0.00 0.00 0.00 0.00 0.00 00:12:22.760 00:12:23.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:23.695 Nvme0n1 : 9.00 6874.44 26.85 0.00 0.00 0.00 0.00 0.00 00:12:23.695 =================================================================================================================== 00:12:23.695 Total : 6874.44 26.85 0.00 0.00 0.00 0.00 0.00 00:12:23.695 00:12:24.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.650 Nvme0n1 : 10.00 6885.50 26.90 0.00 0.00 0.00 0.00 0.00 00:12:24.650 =================================================================================================================== 00:12:24.650 Total : 6885.50 26.90 0.00 0.00 0.00 0.00 0.00 00:12:24.650 00:12:24.650 00:12:24.650 Latency(us) 00:12:24.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.650 Nvme0n1 : 10.02 6885.73 26.90 0.00 0.00 18583.97 14060.45 123922.62 00:12:24.650 =================================================================================================================== 00:12:24.650 Total : 6885.73 26.90 0.00 0.00 18583.97 14060.45 123922.62 00:12:24.650 { 00:12:24.650 "results": [ 00:12:24.650 { 00:12:24.650 "job": "Nvme0n1", 00:12:24.650 "core_mask": "0x2", 00:12:24.650 "workload": "randwrite", 00:12:24.650 "status": "finished", 00:12:24.650 "queue_depth": 128, 00:12:24.650 "io_size": 4096, 00:12:24.650 "runtime": 10.018257, 00:12:24.650 "iops": 6885.728725066646, 00:12:24.650 "mibps": 26.897377832291586, 00:12:24.650 "io_failed": 0, 00:12:24.650 "io_timeout": 0, 00:12:24.650 "avg_latency_us": 18583.966013866397, 00:12:24.650 "min_latency_us": 14060.450909090909, 00:12:24.650 "max_latency_us": 123922.61818181818 00:12:24.650 } 00:12:24.650 ], 00:12:24.650 "core_count": 1 00:12:24.650 } 00:12:24.650 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63149 00:12:24.650 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 63149 ']' 00:12:24.650 13:17:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 63149 00:12:24.650 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:12:24.650 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:24.650 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63149 00:12:24.650 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:24.650 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:24.650 killing process with pid 63149 00:12:24.650 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63149' 00:12:24.650 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 63149 00:12:24.650 Received shutdown signal, test time was about 10.000000 seconds 00:12:24.650 00:12:24.650 Latency(us) 00:12:24.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.650 =================================================================================================================== 00:12:24.650 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:24.650 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 63149 00:12:24.945 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:24.945 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:25.203 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e77edde-496b-4e1c-9e4e-60b495203551 00:12:25.203 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:25.768 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:25.768 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:25.768 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:25.768 [2024-09-27 13:17:27.608328] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:26.026 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e77edde-496b-4e1c-9e4e-60b495203551 00:12:26.026 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:12:26.026 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
4e77edde-496b-4e1c-9e4e-60b495203551 00:12:26.026 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:26.026 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.026 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:26.026 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.026 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:26.026 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.026 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:26.026 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:26.026 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e77edde-496b-4e1c-9e4e-60b495203551 00:12:26.285 request: 00:12:26.285 { 00:12:26.285 "uuid": "4e77edde-496b-4e1c-9e4e-60b495203551", 00:12:26.285 "method": "bdev_lvol_get_lvstores", 00:12:26.285 "req_id": 1 00:12:26.285 } 00:12:26.285 Got JSON-RPC error response 00:12:26.285 response: 00:12:26.285 { 00:12:26.285 "code": -19, 00:12:26.285 "message": "No such device" 00:12:26.285 } 00:12:26.285 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:12:26.285 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:26.285 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:26.285 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:26.285 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:26.543 aio_bdev 00:12:26.543 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 55dec02e-2dea-4d96-aa87-d13d076f8eb3 00:12:26.543 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=55dec02e-2dea-4d96-aa87-d13d076f8eb3 00:12:26.543 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:26.543 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:12:26.543 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:26.543 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:26.543 13:17:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:26.802 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 55dec02e-2dea-4d96-aa87-d13d076f8eb3 -t 2000 00:12:27.061 [ 00:12:27.061 { 00:12:27.061 "name": "55dec02e-2dea-4d96-aa87-d13d076f8eb3", 00:12:27.061 "aliases": [ 00:12:27.061 "lvs/lvol" 00:12:27.061 ], 00:12:27.061 "product_name": "Logical Volume", 00:12:27.061 "block_size": 4096, 00:12:27.061 "num_blocks": 38912, 00:12:27.061 "uuid": "55dec02e-2dea-4d96-aa87-d13d076f8eb3", 00:12:27.061 "assigned_rate_limits": { 00:12:27.061 "rw_ios_per_sec": 0, 00:12:27.061 "rw_mbytes_per_sec": 0, 00:12:27.061 "r_mbytes_per_sec": 0, 00:12:27.061 "w_mbytes_per_sec": 0 00:12:27.061 }, 00:12:27.061 "claimed": false, 00:12:27.061 "zoned": false, 00:12:27.061 "supported_io_types": { 00:12:27.061 "read": true, 00:12:27.061 "write": true, 00:12:27.061 "unmap": true, 00:12:27.061 "flush": false, 00:12:27.061 "reset": true, 00:12:27.061 "nvme_admin": false, 00:12:27.061 "nvme_io": false, 00:12:27.061 "nvme_io_md": false, 00:12:27.061 "write_zeroes": true, 00:12:27.061 "zcopy": false, 00:12:27.061 "get_zone_info": false, 00:12:27.061 "zone_management": false, 00:12:27.061 "zone_append": false, 00:12:27.061 "compare": false, 00:12:27.061 "compare_and_write": false, 00:12:27.061 "abort": false, 00:12:27.061 "seek_hole": true, 00:12:27.061 "seek_data": true, 00:12:27.061 "copy": false, 00:12:27.061 "nvme_iov_md": false 00:12:27.061 }, 00:12:27.061 "driver_specific": { 00:12:27.061 "lvol": { 00:12:27.061 "lvol_store_uuid": "4e77edde-496b-4e1c-9e4e-60b495203551", 00:12:27.061 "base_bdev": "aio_bdev", 00:12:27.061 "thin_provision": false, 00:12:27.061 "num_allocated_clusters": 38, 00:12:27.061 "snapshot": false, 00:12:27.061 "clone": false, 00:12:27.061 "esnap_clone": false 00:12:27.061 } 00:12:27.061 } 00:12:27.061 } 00:12:27.061 ] 00:12:27.061 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:12:27.061 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e77edde-496b-4e1c-9e4e-60b495203551 00:12:27.061 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:27.319 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:27.319 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e77edde-496b-4e1c-9e4e-60b495203551 00:12:27.319 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:27.577 13:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:27.577 13:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 55dec02e-2dea-4d96-aa87-d13d076f8eb3 00:12:27.836 13:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 4e77edde-496b-4e1c-9e4e-60b495203551 00:12:28.095 13:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:28.354 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:28.920 ************************************ 00:12:28.920 END TEST lvs_grow_clean 00:12:28.920 ************************************ 00:12:28.920 00:12:28.920 real 0m18.854s 00:12:28.920 user 0m17.949s 00:12:28.920 sys 0m2.568s 00:12:28.920 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:28.920 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:28.920 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:28.920 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:28.920 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:28.920 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:28.920 ************************************ 00:12:28.920 START TEST lvs_grow_dirty 00:12:28.920 ************************************ 00:12:28.920 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:12:28.920 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:28.920 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:28.920 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:28.920 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:28.920 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:28.920 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:28.920 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:28.920 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:28.920 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:29.178 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:29.179 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:29.437 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@28 -- # lvs=a3b913ba-d1d4-43d2-8391-277392ab96a6 00:12:29.437 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:29.437 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3b913ba-d1d4-43d2-8391-277392ab96a6 00:12:30.004 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:30.004 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:30.004 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a3b913ba-d1d4-43d2-8391-277392ab96a6 lvol 150 00:12:30.262 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a05bd13f-e493-403a-8e7e-47529e82e489 00:12:30.262 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:30.262 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:30.262 [2024-09-27 13:17:32.094818] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:30.262 [2024-09-27 13:17:32.094911] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:30.262 true 00:12:30.520 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:30.520 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3b913ba-d1d4-43d2-8391-277392ab96a6 00:12:30.777 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:30.777 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:31.036 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a05bd13f-e493-403a-8e7e-47529e82e489 00:12:31.294 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:31.553 [2024-09-27 13:17:33.187370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.553 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:31.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
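Note: the dirty run now repeats the I/O phase of the clean run: a bdevperf app is started against its own RPC socket, attached to the subsystem over NVMe/TCP, and while the 10-second randwrite job is in flight the script grows the lvstore and re-reads the cluster count. A sketch with the flags, NQN and lvstore UUID copied from the entries that follow (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_lvol_grow_lvstore -u a3b913ba-d1d4-43d2-8391-277392ab96a6
  rpc.py bdev_lvol_get_lvstores -u a3b913ba-d1d4-43d2-8391-277392ab96a6 | jq -r '.[0].total_data_clusters'   # reported as 49 before the grow, 99 after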
00:12:31.813 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63425 00:12:31.813 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:31.813 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:31.813 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63425 /var/tmp/bdevperf.sock 00:12:31.813 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 63425 ']' 00:12:31.813 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:31.813 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:31.813 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:31.813 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:31.813 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:31.813 [2024-09-27 13:17:33.514416] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:12:31.813 [2024-09-27 13:17:33.514756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63425 ] 00:12:31.813 [2024-09-27 13:17:33.645654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.072 [2024-09-27 13:17:33.734102] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.072 [2024-09-27 13:17:33.775297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:32.072 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:32.072 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:12:32.072 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:32.330 Nvme0n1 00:12:32.330 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:32.589 [ 00:12:32.589 { 00:12:32.589 "name": "Nvme0n1", 00:12:32.589 "aliases": [ 00:12:32.589 "a05bd13f-e493-403a-8e7e-47529e82e489" 00:12:32.589 ], 00:12:32.589 "product_name": "NVMe disk", 00:12:32.589 "block_size": 4096, 00:12:32.589 "num_blocks": 38912, 00:12:32.589 "uuid": "a05bd13f-e493-403a-8e7e-47529e82e489", 00:12:32.589 "numa_id": -1, 00:12:32.589 "assigned_rate_limits": { 00:12:32.589 
"rw_ios_per_sec": 0, 00:12:32.589 "rw_mbytes_per_sec": 0, 00:12:32.589 "r_mbytes_per_sec": 0, 00:12:32.589 "w_mbytes_per_sec": 0 00:12:32.589 }, 00:12:32.589 "claimed": false, 00:12:32.589 "zoned": false, 00:12:32.589 "supported_io_types": { 00:12:32.589 "read": true, 00:12:32.589 "write": true, 00:12:32.589 "unmap": true, 00:12:32.589 "flush": true, 00:12:32.589 "reset": true, 00:12:32.589 "nvme_admin": true, 00:12:32.589 "nvme_io": true, 00:12:32.589 "nvme_io_md": false, 00:12:32.589 "write_zeroes": true, 00:12:32.589 "zcopy": false, 00:12:32.589 "get_zone_info": false, 00:12:32.589 "zone_management": false, 00:12:32.589 "zone_append": false, 00:12:32.589 "compare": true, 00:12:32.589 "compare_and_write": true, 00:12:32.589 "abort": true, 00:12:32.589 "seek_hole": false, 00:12:32.589 "seek_data": false, 00:12:32.589 "copy": true, 00:12:32.589 "nvme_iov_md": false 00:12:32.589 }, 00:12:32.589 "memory_domains": [ 00:12:32.589 { 00:12:32.589 "dma_device_id": "system", 00:12:32.589 "dma_device_type": 1 00:12:32.589 } 00:12:32.589 ], 00:12:32.589 "driver_specific": { 00:12:32.589 "nvme": [ 00:12:32.589 { 00:12:32.589 "trid": { 00:12:32.589 "trtype": "TCP", 00:12:32.589 "adrfam": "IPv4", 00:12:32.589 "traddr": "10.0.0.2", 00:12:32.589 "trsvcid": "4420", 00:12:32.589 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:32.589 }, 00:12:32.589 "ctrlr_data": { 00:12:32.589 "cntlid": 1, 00:12:32.589 "vendor_id": "0x8086", 00:12:32.589 "model_number": "SPDK bdev Controller", 00:12:32.589 "serial_number": "SPDK0", 00:12:32.589 "firmware_revision": "25.01", 00:12:32.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:32.589 "oacs": { 00:12:32.589 "security": 0, 00:12:32.589 "format": 0, 00:12:32.589 "firmware": 0, 00:12:32.589 "ns_manage": 0 00:12:32.589 }, 00:12:32.589 "multi_ctrlr": true, 00:12:32.589 "ana_reporting": false 00:12:32.589 }, 00:12:32.589 "vs": { 00:12:32.589 "nvme_version": "1.3" 00:12:32.589 }, 00:12:32.589 "ns_data": { 00:12:32.589 "id": 1, 00:12:32.589 "can_share": true 00:12:32.589 } 00:12:32.589 } 00:12:32.589 ], 00:12:32.589 "mp_policy": "active_passive" 00:12:32.589 } 00:12:32.589 } 00:12:32.589 ] 00:12:32.848 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63441 00:12:32.848 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:32.848 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:32.848 Running I/O for 10 seconds... 
00:12:33.785 Latency(us) 00:12:33.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:33.785 Nvme0n1 : 1.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:12:33.785 =================================================================================================================== 00:12:33.785 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:12:33.785 00:12:34.719 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a3b913ba-d1d4-43d2-8391-277392ab96a6 00:12:34.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:34.719 Nvme0n1 : 2.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:12:34.719 =================================================================================================================== 00:12:34.719 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:12:34.719 00:12:34.977 true 00:12:34.977 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3b913ba-d1d4-43d2-8391-277392ab96a6 00:12:34.978 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:35.236 13:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:35.236 13:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:35.236 13:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63441 00:12:35.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:35.804 Nvme0n1 : 3.00 7408.33 28.94 0.00 0.00 0.00 0.00 0.00 00:12:35.804 =================================================================================================================== 00:12:35.804 Total : 7408.33 28.94 0.00 0.00 0.00 0.00 0.00 00:12:35.804 00:12:36.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:36.741 Nvme0n1 : 4.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:12:36.741 =================================================================================================================== 00:12:36.741 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:12:36.741 00:12:38.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:38.116 Nvme0n1 : 5.00 7340.60 28.67 0.00 0.00 0.00 0.00 0.00 00:12:38.116 =================================================================================================================== 00:12:38.116 Total : 7340.60 28.67 0.00 0.00 0.00 0.00 0.00 00:12:38.116 00:12:39.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:39.046 Nvme0n1 : 6.00 7149.33 27.93 0.00 0.00 0.00 0.00 0.00 00:12:39.046 =================================================================================================================== 00:12:39.046 Total : 7149.33 27.93 0.00 0.00 0.00 0.00 0.00 00:12:39.046 00:12:39.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:39.978 Nvme0n1 : 7.00 7089.57 27.69 0.00 0.00 0.00 0.00 0.00 00:12:39.978 =================================================================================================================== 00:12:39.978 
Total : 7089.57 27.69 0.00 0.00 0.00 0.00 0.00 00:12:39.978 00:12:40.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:40.912 Nvme0n1 : 8.00 7076.50 27.64 0.00 0.00 0.00 0.00 0.00 00:12:40.912 =================================================================================================================== 00:12:40.912 Total : 7076.50 27.64 0.00 0.00 0.00 0.00 0.00 00:12:40.912 00:12:41.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:41.848 Nvme0n1 : 9.00 7052.22 27.55 0.00 0.00 0.00 0.00 0.00 00:12:41.848 =================================================================================================================== 00:12:41.848 Total : 7052.22 27.55 0.00 0.00 0.00 0.00 0.00 00:12:41.848 00:12:42.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:42.782 Nvme0n1 : 10.00 7032.80 27.47 0.00 0.00 0.00 0.00 0.00 00:12:42.782 =================================================================================================================== 00:12:42.783 Total : 7032.80 27.47 0.00 0.00 0.00 0.00 0.00 00:12:42.783 00:12:42.783 00:12:42.783 Latency(us) 00:12:42.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.783 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:42.783 Nvme0n1 : 10.01 7039.89 27.50 0.00 0.00 18176.16 13822.14 152520.15 00:12:42.783 =================================================================================================================== 00:12:42.783 Total : 7039.89 27.50 0.00 0.00 18176.16 13822.14 152520.15 00:12:42.783 { 00:12:42.783 "results": [ 00:12:42.783 { 00:12:42.783 "job": "Nvme0n1", 00:12:42.783 "core_mask": "0x2", 00:12:42.783 "workload": "randwrite", 00:12:42.783 "status": "finished", 00:12:42.783 "queue_depth": 128, 00:12:42.783 "io_size": 4096, 00:12:42.783 "runtime": 10.008114, 00:12:42.783 "iops": 7039.8878350106725, 00:12:42.783 "mibps": 27.49956185551044, 00:12:42.783 "io_failed": 0, 00:12:42.783 "io_timeout": 0, 00:12:42.783 "avg_latency_us": 18176.163730916523, 00:12:42.783 "min_latency_us": 13822.138181818182, 00:12:42.783 "max_latency_us": 152520.14545454545 00:12:42.783 } 00:12:42.783 ], 00:12:42.783 "core_count": 1 00:12:42.783 } 00:12:42.783 13:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63425 00:12:42.783 13:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 63425 ']' 00:12:42.783 13:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 63425 00:12:42.783 13:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:12:42.783 13:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:42.783 13:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63425 00:12:42.783 killing process with pid 63425 00:12:42.783 Received shutdown signal, test time was about 10.000000 seconds 00:12:42.783 00:12:42.783 Latency(us) 00:12:42.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.783 =================================================================================================================== 00:12:42.783 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:42.783 13:17:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:42.783 13:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:42.783 13:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63425' 00:12:42.783 13:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 63425 00:12:42.783 13:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 63425 00:12:43.041 13:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:43.300 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:43.558 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3b913ba-d1d4-43d2-8391-277392ab96a6 00:12:43.558 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:44.126 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:44.126 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:44.126 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63068 00:12:44.126 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63068 00:12:44.126 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63068 Killed "${NVMF_APP[@]}" "$@" 00:12:44.126 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:44.126 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:44.126 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:12:44.126 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:44.126 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:44.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:44.126 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@324 -- # nvmfpid=63580 00:12:44.126 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:44.126 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@325 -- # waitforlisten 63580 00:12:44.126 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 63580 ']' 00:12:44.126 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.126 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:44.126 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.126 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:44.126 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:44.126 [2024-09-27 13:17:45.770714] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:12:44.126 [2024-09-27 13:17:45.771035] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.126 [2024-09-27 13:17:45.905943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.126 [2024-09-27 13:17:45.962536] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.126 [2024-09-27 13:17:45.962844] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.126 [2024-09-27 13:17:45.962991] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.126 [2024-09-27 13:17:45.963139] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.126 [2024-09-27 13:17:45.963180] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
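Note: this is the dirty part of the test. The first nvmf_tgt (pid 63068) was killed with SIGKILL a few entries above, so the lvstore on aio_bdev was not unloaded cleanly; the freshly started target (pid 63580) re-creates the AIO bdev and the blobstore performs recovery, which is what the bs_recover notices just below show. A sketch of that restart path, with the pids, paths and flags taken from this trace (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

  kill -9 63068                      # dirty shutdown: the lvstore is never closed cleanly
  ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096   # triggers 'Performing recovery on blobstore'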
00:12:44.126 [2024-09-27 13:17:45.963300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.384 [2024-09-27 13:17:45.993806] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:44.951 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:44.951 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:12:44.951 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:12:44.951 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:44.951 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:44.951 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.951 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:45.518 [2024-09-27 13:17:47.060290] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:45.518 [2024-09-27 13:17:47.060709] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:45.518 [2024-09-27 13:17:47.061011] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:45.518 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:45.518 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a05bd13f-e493-403a-8e7e-47529e82e489 00:12:45.518 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=a05bd13f-e493-403a-8e7e-47529e82e489 00:12:45.518 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:45.518 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:12:45.518 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:45.518 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:45.518 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:45.777 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a05bd13f-e493-403a-8e7e-47529e82e489 -t 2000 00:12:45.777 [ 00:12:45.777 { 00:12:45.777 "name": "a05bd13f-e493-403a-8e7e-47529e82e489", 00:12:45.777 "aliases": [ 00:12:45.777 "lvs/lvol" 00:12:45.777 ], 00:12:45.777 "product_name": "Logical Volume", 00:12:45.777 "block_size": 4096, 00:12:45.777 "num_blocks": 38912, 00:12:45.777 "uuid": "a05bd13f-e493-403a-8e7e-47529e82e489", 00:12:45.777 "assigned_rate_limits": { 00:12:45.777 "rw_ios_per_sec": 0, 00:12:45.777 "rw_mbytes_per_sec": 0, 00:12:45.777 "r_mbytes_per_sec": 0, 00:12:45.777 "w_mbytes_per_sec": 0 00:12:45.777 }, 00:12:45.777 
"claimed": false, 00:12:45.777 "zoned": false, 00:12:45.777 "supported_io_types": { 00:12:45.777 "read": true, 00:12:45.777 "write": true, 00:12:45.777 "unmap": true, 00:12:45.777 "flush": false, 00:12:45.777 "reset": true, 00:12:45.778 "nvme_admin": false, 00:12:45.778 "nvme_io": false, 00:12:45.778 "nvme_io_md": false, 00:12:45.778 "write_zeroes": true, 00:12:45.778 "zcopy": false, 00:12:45.778 "get_zone_info": false, 00:12:45.778 "zone_management": false, 00:12:45.778 "zone_append": false, 00:12:45.778 "compare": false, 00:12:45.778 "compare_and_write": false, 00:12:45.778 "abort": false, 00:12:45.778 "seek_hole": true, 00:12:45.778 "seek_data": true, 00:12:45.778 "copy": false, 00:12:45.778 "nvme_iov_md": false 00:12:45.778 }, 00:12:45.778 "driver_specific": { 00:12:45.778 "lvol": { 00:12:45.778 "lvol_store_uuid": "a3b913ba-d1d4-43d2-8391-277392ab96a6", 00:12:45.778 "base_bdev": "aio_bdev", 00:12:45.778 "thin_provision": false, 00:12:45.778 "num_allocated_clusters": 38, 00:12:45.778 "snapshot": false, 00:12:45.778 "clone": false, 00:12:45.778 "esnap_clone": false 00:12:45.778 } 00:12:45.778 } 00:12:45.778 } 00:12:45.778 ] 00:12:46.040 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:12:46.040 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3b913ba-d1d4-43d2-8391-277392ab96a6 00:12:46.040 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:46.298 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:46.298 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:46.298 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3b913ba-d1d4-43d2-8391-277392ab96a6 00:12:46.556 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:46.556 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:46.815 [2024-09-27 13:17:48.470262] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:46.815 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3b913ba-d1d4-43d2-8391-277392ab96a6 00:12:46.815 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:12:46.815 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3b913ba-d1d4-43d2-8391-277392ab96a6 00:12:46.816 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.816 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.816 13:17:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.816 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.816 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.816 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.816 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.816 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:46.816 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3b913ba-d1d4-43d2-8391-277392ab96a6 00:12:47.074 request: 00:12:47.074 { 00:12:47.074 "uuid": "a3b913ba-d1d4-43d2-8391-277392ab96a6", 00:12:47.074 "method": "bdev_lvol_get_lvstores", 00:12:47.074 "req_id": 1 00:12:47.074 } 00:12:47.074 Got JSON-RPC error response 00:12:47.074 response: 00:12:47.074 { 00:12:47.074 "code": -19, 00:12:47.074 "message": "No such device" 00:12:47.074 } 00:12:47.074 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:12:47.074 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:47.074 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:47.074 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:47.074 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:47.332 aio_bdev 00:12:47.332 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a05bd13f-e493-403a-8e7e-47529e82e489 00:12:47.332 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=a05bd13f-e493-403a-8e7e-47529e82e489 00:12:47.332 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:47.332 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:12:47.332 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:47.332 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:47.332 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:47.590 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a05bd13f-e493-403a-8e7e-47529e82e489 -t 2000 00:12:47.848 [ 00:12:47.848 { 
00:12:47.848 "name": "a05bd13f-e493-403a-8e7e-47529e82e489", 00:12:47.848 "aliases": [ 00:12:47.848 "lvs/lvol" 00:12:47.848 ], 00:12:47.848 "product_name": "Logical Volume", 00:12:47.848 "block_size": 4096, 00:12:47.848 "num_blocks": 38912, 00:12:47.848 "uuid": "a05bd13f-e493-403a-8e7e-47529e82e489", 00:12:47.848 "assigned_rate_limits": { 00:12:47.848 "rw_ios_per_sec": 0, 00:12:47.848 "rw_mbytes_per_sec": 0, 00:12:47.848 "r_mbytes_per_sec": 0, 00:12:47.848 "w_mbytes_per_sec": 0 00:12:47.848 }, 00:12:47.848 "claimed": false, 00:12:47.848 "zoned": false, 00:12:47.848 "supported_io_types": { 00:12:47.848 "read": true, 00:12:47.848 "write": true, 00:12:47.848 "unmap": true, 00:12:47.848 "flush": false, 00:12:47.848 "reset": true, 00:12:47.848 "nvme_admin": false, 00:12:47.848 "nvme_io": false, 00:12:47.848 "nvme_io_md": false, 00:12:47.848 "write_zeroes": true, 00:12:47.848 "zcopy": false, 00:12:47.848 "get_zone_info": false, 00:12:47.848 "zone_management": false, 00:12:47.848 "zone_append": false, 00:12:47.848 "compare": false, 00:12:47.848 "compare_and_write": false, 00:12:47.848 "abort": false, 00:12:47.848 "seek_hole": true, 00:12:47.848 "seek_data": true, 00:12:47.848 "copy": false, 00:12:47.848 "nvme_iov_md": false 00:12:47.848 }, 00:12:47.848 "driver_specific": { 00:12:47.848 "lvol": { 00:12:47.848 "lvol_store_uuid": "a3b913ba-d1d4-43d2-8391-277392ab96a6", 00:12:47.848 "base_bdev": "aio_bdev", 00:12:47.848 "thin_provision": false, 00:12:47.848 "num_allocated_clusters": 38, 00:12:47.848 "snapshot": false, 00:12:47.848 "clone": false, 00:12:47.848 "esnap_clone": false 00:12:47.848 } 00:12:47.848 } 00:12:47.848 } 00:12:47.848 ] 00:12:47.848 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:12:47.848 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3b913ba-d1d4-43d2-8391-277392ab96a6 00:12:47.848 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:48.106 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:48.106 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3b913ba-d1d4-43d2-8391-277392ab96a6 00:12:48.106 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:48.364 13:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:48.364 13:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a05bd13f-e493-403a-8e7e-47529e82e489 00:12:48.622 13:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a3b913ba-d1d4-43d2-8391-277392ab96a6 00:12:49.188 13:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:49.188 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:49.755 00:12:49.755 real 0m20.852s 00:12:49.755 user 0m43.317s 00:12:49.755 sys 0m7.854s 00:12:49.755 ************************************ 00:12:49.755 END TEST lvs_grow_dirty 00:12:49.755 ************************************ 00:12:49.755 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.755 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:49.755 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:49.755 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:12:49.755 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:12:49.755 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:12:49.755 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:49.755 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:12:49.755 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:12:49.755 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:12:49.755 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:49.755 nvmf_trace.0 00:12:49.755 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:12:49.755 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:49.755 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@331 -- # nvmfcleanup 00:12:49.755 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:12:50.012 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:50.012 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:12:50.012 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:50.012 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:50.012 rmmod nvme_tcp 00:12:50.012 rmmod nvme_fabrics 00:12:50.012 rmmod nvme_keyring 00:12:50.270 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:50.270 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:12:50.270 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:12:50.270 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # '[' -n 63580 ']' 00:12:50.270 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@333 -- # killprocess 63580 00:12:50.270 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 63580 ']' 00:12:50.270 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 63580 00:12:50.270 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:12:50.270 13:17:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:50.270 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63580 00:12:50.270 killing process with pid 63580 00:12:50.270 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:50.270 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:50.270 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63580' 00:12:50.270 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 63580 00:12:50.270 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 63580 00:12:50.270 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:12:50.270 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # nvmf_fini 00:12:50.270 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@264 -- # local dev 00:12:50.270 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@267 -- # remove_target_ns 00:12:50.270 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:50.270 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:50.270 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:50.270 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@268 -- # delete_main_bridge 00:12:50.270 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:50.270 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:12:50.270 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:12:50.270 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:12:50.270 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:12:50.270 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # continue 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # continue 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@284 -- # iptr 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@538 -- # iptables-save 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@538 -- # iptables-restore 00:12:50.529 ************************************ 00:12:50.529 END TEST nvmf_lvs_grow 00:12:50.529 ************************************ 00:12:50.529 00:12:50.529 real 0m42.076s 00:12:50.529 user 1m8.295s 00:12:50.529 sys 0m11.374s 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@24 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:50.529 ************************************ 00:12:50.529 START TEST nvmf_bdev_io_wait 00:12:50.529 ************************************ 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:50.529 * Looking for test storage... 
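The lvs_grow_dirty case that wraps up above drives the target entirely through scripts/rpc.py: it reads the logical volume store's cluster counts, hot-removes the backing AIO bdev, asserts that bdev_lvol_get_lvstores then fails with -19 ("No such device"), re-creates the AIO bdev, waits for the lvol to reappear, and finally tears everything down. A condensed sketch of that flow, using the UUIDs and paths printed in the log (illustrative only, not the verbatim test script):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    LVS=a3b913ba-d1d4-43d2-8391-277392ab96a6        # lvol store UUID from the log
    LVOL=a05bd13f-e493-403a-8e7e-47529e82e489       # lvol bdev UUID from the log
    AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    $RPC bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].free_clusters'        # test expects 61
    $RPC bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].total_data_clusters'  # test expects 99
    $RPC bdev_aio_delete aio_bdev                        # hot-remove the base bdev, closing the lvstore
    $RPC bdev_lvol_get_lvstores -u $LVS \
        && echo "unexpected success"                     # should fail with -19, "No such device"
    $RPC bdev_aio_create $AIO aio_bdev 4096              # re-create the base bdev
    $RPC bdev_wait_for_examine
    $RPC bdev_get_bdevs -b $LVOL -t 2000                 # lvol comes back on the re-examined store
    $RPC bdev_lvol_delete $LVOL
    $RPC bdev_lvol_delete_lvstore -u $LVS
    $RPC bdev_aio_delete aio_bdev
    rm -f $AIO

The NOT/valid_exec_arg machinery from autotest_common.sh that appears around the failing call exists to assert that the RPC fails; the -19 JSON-RPC response is the expected outcome while aio_bdev is absent.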
00:12:50.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:12:50.529 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:50.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.789 --rc genhtml_branch_coverage=1 00:12:50.789 --rc genhtml_function_coverage=1 00:12:50.789 --rc genhtml_legend=1 00:12:50.789 --rc geninfo_all_blocks=1 00:12:50.789 --rc geninfo_unexecuted_blocks=1 00:12:50.789 00:12:50.789 ' 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:50.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.789 --rc genhtml_branch_coverage=1 00:12:50.789 --rc genhtml_function_coverage=1 00:12:50.789 --rc genhtml_legend=1 00:12:50.789 --rc geninfo_all_blocks=1 00:12:50.789 --rc geninfo_unexecuted_blocks=1 00:12:50.789 00:12:50.789 ' 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:50.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.789 --rc genhtml_branch_coverage=1 00:12:50.789 --rc genhtml_function_coverage=1 00:12:50.789 --rc genhtml_legend=1 00:12:50.789 --rc geninfo_all_blocks=1 00:12:50.789 --rc geninfo_unexecuted_blocks=1 00:12:50.789 00:12:50.789 ' 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:50.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.789 --rc genhtml_branch_coverage=1 00:12:50.789 --rc genhtml_function_coverage=1 00:12:50.789 --rc genhtml_legend=1 00:12:50.789 --rc geninfo_all_blocks=1 00:12:50.789 --rc geninfo_unexecuted_blocks=1 00:12:50.789 00:12:50.789 ' 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:50.789 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@50 -- # : 0 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:50.790 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # prepare_net_devs 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # local -g is_hw=no 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # remove_target_ns 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@276 -- # nvmf_veth_init 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@233 -- # create_target_ns 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@234 -- # create_main_bridge 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@114 -- # delete_main_bridge 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # return 0 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # 
_ns=NVMF_TARGET_NS_CMD 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # set_up initiator0 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:12:50.790 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # set_up target0 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set target0 up 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@161 -- # set_up target0_br 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link 
set target0_br up 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_ns target0 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:12:50.791 10.0.0.1 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.2 
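The xtrace above is nvmf_veth_init building the first initiator/target interface pair for the bdev_io_wait run: a bridge in the root network namespace, a dedicated namespace for the target side, and one veth pair per endpoint. Stripped of the setup.sh helper indirection, it reduces to roughly the following sequence (device names, namespace, and addresses are the ones printed in the log; this is a condensed sketch, not the verbatim script):

    ip netns add nvmf_ns_spdk                            # namespace that hosts the target side
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT  # allow traffic to cross the bridge
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0    type veth peer name target0_br
    ip link set initiator0 up
    ip link set initiator0_br up
    ip link set target0 up
    ip link set target0_br up
    ip link set target0 netns nvmf_ns_spdk               # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev initiator0               # initiator0 -> 10.0.0.1
    ip netns exec nvmf_ns_spdk \
        ip addr add 10.0.0.2/24 dev target0              # target0    -> 10.0.0.2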
00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:12:50.791 10.0.0.2 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@75 -- # set_up initiator0 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:50.791 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@138 -- # set_up target0_br 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link 
set target0_br up' 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # set_up initiator1 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:12:51.051 
13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # set_up target1 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set target1 up 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@161 -- # set_up target1_br 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_ns target1 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772163 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:12:51.051 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:12:51.052 13:17:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:12:51.052 10.0.0.3 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772164 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:12:51.052 10.0.0.4 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@75 -- # set_up initiator1 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:12:51.052 13:17:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@138 -- # set_up target1_br 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 2 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # 
(( pair = 0 )) 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo initiator0 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=initiator0 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:51.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:51.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:12:51.052 00:12:51.052 --- 10.0.0.1 ping statistics --- 00:12:51.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.052 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo target0 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=target0 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:51.052 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:51.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:51.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.031 ms 00:12:51.052 00:12:51.052 --- 10.0.0.2 ping statistics --- 00:12:51.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.053 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator1 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo initiator1 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=initiator1 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:12:51.053 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:51.053 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:12:51.053 00:12:51.053 --- 10.0.0.3 ping statistics --- 00:12:51.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.053 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target1 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo target1 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=target1 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:12:51.053 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:12:51.053 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:12:51.053 00:12:51.053 --- 10.0.0.4 ping statistics --- 00:12:51.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.053 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # return 0 00:12:51.053 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo initiator0 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=initiator0 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:12:51.312 13:17:52 
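The ping_ips loop above checks both pairs the same way: each initiator address is reached from inside the nvmf_ns_spdk namespace, each target address from the host side. Condensed, the connectivity check is:

# initiator addresses answered from the target namespace, target addresses from the host
for ip in 10.0.0.1 10.0.0.3; do
  ip netns exec nvmf_ns_spdk ping -c 1 "$ip"
done
for ip in 10.0.0.2 10.0.0.4; do
  ping -c 1 "$ip"
done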
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator1 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo initiator1 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=initiator1 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo target0 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=target0 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:51.312 13:17:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target1 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo target1 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=target1 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 
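The legacy environment variables populated here are recovered straight from each device's ifalias, which was written when the address was assigned earlier in the setup. In effect:

# read back the addresses stored in ifalias (target devices live in the nvmf_ns_spdk namespace)
NVMF_FIRST_INITIATOR_IP=$(cat /sys/class/net/initiator0/ifalias)                         # 10.0.0.1
NVMF_SECOND_INITIATOR_IP=$(cat /sys/class/net/initiator1/ifalias)                        # 10.0.0.3
NVMF_FIRST_TARGET_IP=$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias)    # 10.0.0.2
NVMF_SECOND_TARGET_IP=$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias)   # 10.0.0.4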
-- # set +x 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@324 -- # nvmfpid=63951 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # waitforlisten 63951 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 63951 ']' 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.312 13:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.312 [2024-09-27 13:17:53.038482] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:12:51.312 [2024-09-27 13:17:53.038578] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.572 [2024-09-27 13:17:53.175958] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.572 [2024-09-27 13:17:53.246832] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.572 [2024-09-27 13:17:53.246893] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.572 [2024-09-27 13:17:53.246907] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.572 [2024-09-27 13:17:53.246916] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.572 [2024-09-27 13:17:53.246925] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
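nvmfappstart boils down to launching nvmf_tgt inside the test namespace and blocking until its RPC socket answers. A simplified equivalent (the polling loop is a stand-in for the waitforlisten helper, and rpc_get_methods is just a cheap RPC used to probe the socket):

# start the target in the namespace: app instance 0, all tracepoint groups, cores 0-3, deferred init
ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
# wait until /var/tmp/spdk.sock accepts RPCs
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
  sleep 0.5
done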
00:12:51.572 [2024-09-27 13:17:53.249719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.572 [2024-09-27 13:17:53.250019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.572 [2024-09-27 13:17:53.250649] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.572 [2024-09-27 13:17:53.250720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.572 [2024-09-27 13:17:53.384059] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.572 [2024-09-27 13:17:53.398715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.572 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.830 Malloc0 00:12:51.830 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.830 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:12:51.830 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.830 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.830 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.830 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:51.830 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.830 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.830 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.831 [2024-09-27 13:17:53.456801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=63979 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=63981 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # config=() 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # local subsystem config 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=63983 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # for subsystem in "${@:-1}" 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # config=() 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # config+=("$(cat <<-EOF 00:12:51.831 { 00:12:51.831 "params": { 00:12:51.831 "name": "Nvme$subsystem", 00:12:51.831 "trtype": "$TEST_TRANSPORT", 00:12:51.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.831 "adrfam": "ipv4", 00:12:51.831 "trsvcid": "$NVMF_PORT", 00:12:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.831 "hdgst": ${hdgst:-false}, 00:12:51.831 
"ddgst": ${ddgst:-false} 00:12:51.831 }, 00:12:51.831 "method": "bdev_nvme_attach_controller" 00:12:51.831 } 00:12:51.831 EOF 00:12:51.831 )") 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # local subsystem config 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # for subsystem in "${@:-1}" 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # config+=("$(cat <<-EOF 00:12:51.831 { 00:12:51.831 "params": { 00:12:51.831 "name": "Nvme$subsystem", 00:12:51.831 "trtype": "$TEST_TRANSPORT", 00:12:51.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.831 "adrfam": "ipv4", 00:12:51.831 "trsvcid": "$NVMF_PORT", 00:12:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.831 "hdgst": ${hdgst:-false}, 00:12:51.831 "ddgst": ${ddgst:-false} 00:12:51.831 }, 00:12:51.831 "method": "bdev_nvme_attach_controller" 00:12:51.831 } 00:12:51.831 EOF 00:12:51.831 )") 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=63985 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # cat 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # cat 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # config=() 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # local subsystem config 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # for subsystem in "${@:-1}" 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # config+=("$(cat <<-EOF 00:12:51.831 { 00:12:51.831 "params": { 00:12:51.831 "name": "Nvme$subsystem", 00:12:51.831 "trtype": "$TEST_TRANSPORT", 00:12:51.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.831 "adrfam": "ipv4", 00:12:51.831 "trsvcid": "$NVMF_PORT", 00:12:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.831 "hdgst": ${hdgst:-false}, 00:12:51.831 "ddgst": ${ddgst:-false} 00:12:51.831 }, 00:12:51.831 "method": "bdev_nvme_attach_controller" 00:12:51.831 } 00:12:51.831 EOF 00:12:51.831 )") 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # jq . 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # jq . 
00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # config=() 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # local subsystem config 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # for subsystem in "${@:-1}" 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # config+=("$(cat <<-EOF 00:12:51.831 { 00:12:51.831 "params": { 00:12:51.831 "name": "Nvme$subsystem", 00:12:51.831 "trtype": "$TEST_TRANSPORT", 00:12:51.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.831 "adrfam": "ipv4", 00:12:51.831 "trsvcid": "$NVMF_PORT", 00:12:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.831 "hdgst": ${hdgst:-false}, 00:12:51.831 "ddgst": ${ddgst:-false} 00:12:51.831 }, 00:12:51.831 "method": "bdev_nvme_attach_controller" 00:12:51.831 } 00:12:51.831 EOF 00:12:51.831 )") 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # cat 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@393 -- # IFS=, 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # printf '%s\n' '{ 00:12:51.831 "params": { 00:12:51.831 "name": "Nvme1", 00:12:51.831 "trtype": "tcp", 00:12:51.831 "traddr": "10.0.0.2", 00:12:51.831 "adrfam": "ipv4", 00:12:51.831 "trsvcid": "4420", 00:12:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.831 "hdgst": false, 00:12:51.831 "ddgst": false 00:12:51.831 }, 00:12:51.831 "method": "bdev_nvme_attach_controller" 00:12:51.831 }' 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # cat 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@393 -- # IFS=, 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # printf '%s\n' '{ 00:12:51.831 "params": { 00:12:51.831 "name": "Nvme1", 00:12:51.831 "trtype": "tcp", 00:12:51.831 "traddr": "10.0.0.2", 00:12:51.831 "adrfam": "ipv4", 00:12:51.831 "trsvcid": "4420", 00:12:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.831 "hdgst": false, 00:12:51.831 "ddgst": false 00:12:51.831 }, 00:12:51.831 "method": "bdev_nvme_attach_controller" 00:12:51.831 }' 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # jq . 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@393 -- # IFS=, 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # printf '%s\n' '{ 00:12:51.831 "params": { 00:12:51.831 "name": "Nvme1", 00:12:51.831 "trtype": "tcp", 00:12:51.831 "traddr": "10.0.0.2", 00:12:51.831 "adrfam": "ipv4", 00:12:51.831 "trsvcid": "4420", 00:12:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.831 "hdgst": false, 00:12:51.831 "ddgst": false 00:12:51.831 }, 00:12:51.831 "method": "bdev_nvme_attach_controller" 00:12:51.831 }' 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # jq . 
00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@393 -- # IFS=, 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # printf '%s\n' '{ 00:12:51.831 "params": { 00:12:51.831 "name": "Nvme1", 00:12:51.831 "trtype": "tcp", 00:12:51.831 "traddr": "10.0.0.2", 00:12:51.831 "adrfam": "ipv4", 00:12:51.831 "trsvcid": "4420", 00:12:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.831 "hdgst": false, 00:12:51.831 "ddgst": false 00:12:51.831 }, 00:12:51.831 "method": "bdev_nvme_attach_controller" 00:12:51.831 }' 00:12:51.831 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 63979 00:12:51.831 [2024-09-27 13:17:53.538497] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:12:51.831 [2024-09-27 13:17:53.538597] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:51.831 [2024-09-27 13:17:53.546616] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:12:51.831 [2024-09-27 13:17:53.546856] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:51.831 [2024-09-27 13:17:53.558506] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:12:51.832 [2024-09-27 13:17:53.558582] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:51.832 [2024-09-27 13:17:53.574575] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:12:51.832 [2024-09-27 13:17:53.574719] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:52.090 [2024-09-27 13:17:53.720098] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.090 [2024-09-27 13:17:53.760842] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.090 [2024-09-27 13:17:53.776744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:12:52.090 [2024-09-27 13:17:53.809905] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:52.090 [2024-09-27 13:17:53.810231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.090 [2024-09-27 13:17:53.829110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:12:52.090 [2024-09-27 13:17:53.854494] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.090 [2024-09-27 13:17:53.864948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:52.090 [2024-09-27 13:17:53.865823] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:12:52.090 [2024-09-27 13:17:53.908185] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:12:52.090 [2024-09-27 13:17:53.912632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:52.348 Running I/O for 1 seconds... 00:12:52.348 [2024-09-27 13:17:53.957508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:52.348 Running I/O for 1 seconds... 00:12:52.348 Running I/O for 1 seconds... 00:12:52.348 Running I/O for 1 seconds... 
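Each of the four bdevperf instances above reads its bdev configuration from /dev/fd/63; the printed bdev_nvme_attach_controller parameters are what gen_nvmf_target_json wraps into that config. A hand-rolled equivalent of the write job, writing the config to a scratch file instead (the surrounding "subsystems"/"bdev"/"config" wrapper is an assumption about the expected layout, and /tmp/nvme1.json is a name picked only for this sketch):

cat > /tmp/nvme1.json << 'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# 128 queue depth, 4 KiB IOs, 1 second of writes, 256 MB of memory, core mask 0x10
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
  --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256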
00:12:53.284 168616.00 IOPS, 658.66 MiB/s 00:12:53.284 Latency(us) 00:12:53.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.284 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:53.284 Nvme1n1 : 1.00 168267.27 657.29 0.00 0.00 756.72 404.01 2040.55 00:12:53.284 =================================================================================================================== 00:12:53.284 Total : 168267.27 657.29 0.00 0.00 756.72 404.01 2040.55 00:12:53.284 9819.00 IOPS, 38.36 MiB/s 00:12:53.284 Latency(us) 00:12:53.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.284 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:53.284 Nvme1n1 : 1.01 9858.78 38.51 0.00 0.00 12920.16 7685.59 19779.96 00:12:53.284 =================================================================================================================== 00:12:53.284 Total : 9858.78 38.51 0.00 0.00 12920.16 7685.59 19779.96 00:12:53.284 8674.00 IOPS, 33.88 MiB/s 00:12:53.284 Latency(us) 00:12:53.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.284 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:53.284 Nvme1n1 : 1.01 8740.62 34.14 0.00 0.00 14572.87 7238.75 25261.15 00:12:53.284 =================================================================================================================== 00:12:53.284 Total : 8740.62 34.14 0.00 0.00 14572.87 7238.75 25261.15 00:12:53.284 7718.00 IOPS, 30.15 MiB/s 00:12:53.284 Latency(us) 00:12:53.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.284 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:53.284 Nvme1n1 : 1.01 7793.39 30.44 0.00 0.00 16346.68 4230.05 26214.40 00:12:53.284 =================================================================================================================== 00:12:53.284 Total : 7793.39 30.44 0.00 0.00 16346.68 4230.05 26214.40 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 63981 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 63983 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 63985 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # nvmfcleanup 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:12:53.543 
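A quick sanity check on the table above: the MiB/s column is simply IOPS multiplied by the 4096-byte IO size, for example:

awk 'BEGIN { printf "%.2f MiB/s\n", 168267.27 * 4096 / (1024 * 1024) }'   # flush job -> 657.29
awk 'BEGIN { printf "%.2f MiB/s\n", 9858.78 * 4096 / (1024 * 1024) }'     # read job  -> 38.51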
13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:53.543 rmmod nvme_tcp 00:12:53.543 rmmod nvme_fabrics 00:12:53.543 rmmod nvme_keyring 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # '[' -n 63951 ']' 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@333 -- # killprocess 63951 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 63951 ']' 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 63951 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:53.543 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63951 00:12:53.543 killing process with pid 63951 00:12:53.544 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:53.544 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:53.544 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63951' 00:12:53.544 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 63951 00:12:53.544 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 63951 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # nvmf_fini 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@264 -- # local dev 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@267 -- # remove_target_ns 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@268 -- # delete_main_bridge 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:12:53.801 13:17:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:12:53.801 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # continue 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # continue 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@284 -- # iptr 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@538 -- # iptables-save 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:12:54.060 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@538 -- # iptables-restore 00:12:54.060 
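Teardown mirrors the setup: the kernel initiator modules are removed, the target process is killed, the bridge and the host-side veth ends are deleted (the target-side devices go away with the namespace), and the tagged firewall rules are filtered back out. Roughly:

modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                          # killprocess 63951 in the trace above
ip netns delete nvmf_ns_spdk             # assumed equivalent of _remove_target_ns
ip link delete nvmf_br
ip link delete initiator0
ip link delete initiator1
# drop every rule carrying the SPDK_NVMF comment, keep everything else
iptables-save | grep -v SPDK_NVMF | iptables-restore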
00:12:54.060 real 0m3.429s 00:12:54.060 user 0m13.525s 00:12:54.060 sys 0m2.226s 00:12:54.061 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.061 ************************************ 00:12:54.061 END TEST nvmf_bdev_io_wait 00:12:54.061 ************************************ 00:12:54.061 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:54.061 13:17:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@25 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:54.061 13:17:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:54.061 13:17:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.061 13:17:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:54.061 ************************************ 00:12:54.061 START TEST nvmf_queue_depth 00:12:54.061 ************************************ 00:12:54.061 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:54.061 * Looking for test storage... 00:12:54.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:54.061 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:54.061 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:54.061 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:54.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.318 --rc genhtml_branch_coverage=1 00:12:54.318 --rc genhtml_function_coverage=1 00:12:54.318 --rc genhtml_legend=1 00:12:54.318 --rc geninfo_all_blocks=1 00:12:54.318 --rc geninfo_unexecuted_blocks=1 00:12:54.318 00:12:54.318 ' 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:54.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.318 --rc genhtml_branch_coverage=1 00:12:54.318 --rc genhtml_function_coverage=1 00:12:54.318 --rc genhtml_legend=1 00:12:54.318 --rc geninfo_all_blocks=1 00:12:54.318 --rc geninfo_unexecuted_blocks=1 00:12:54.318 00:12:54.318 ' 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:54.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.318 --rc genhtml_branch_coverage=1 00:12:54.318 --rc genhtml_function_coverage=1 00:12:54.318 --rc genhtml_legend=1 00:12:54.318 --rc geninfo_all_blocks=1 00:12:54.318 --rc geninfo_unexecuted_blocks=1 00:12:54.318 00:12:54.318 ' 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:54.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.318 --rc genhtml_branch_coverage=1 00:12:54.318 --rc genhtml_function_coverage=1 00:12:54.318 --rc genhtml_legend=1 00:12:54.318 --rc geninfo_all_blocks=1 00:12:54.318 --rc geninfo_unexecuted_blocks=1 00:12:54.318 00:12:54.318 ' 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:54.318 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@50 -- # : 0 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:54.319 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 
-- # '[' -n '' ']' 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # prepare_net_devs 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # local -g is_hw=no 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # remove_target_ns 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@276 -- # nvmf_veth_init 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@233 -- # create_target_ns 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@234 -- # create_main_bridge 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@114 -- # delete_main_bridge 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@130 -- # return 0 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:12:54.319 13:17:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@160 -- # set_up initiator0 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:12:54.319 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@160 -- # set_up target0 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set target0 up 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@161 -- # set_up target0_br 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:12:54.319 13:17:56 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_ns target0 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:12:54.319 10.0.0.1 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:12:54.319 13:17:56 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:12:54.319 10.0.0.2 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@75 -- # set_up initiator0 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@138 -- # set_up target0_br 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:54.319 13:17:56 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:12:54.319 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@160 -- # set_up initiator1 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@161 -- # set_up initiator1_br 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:12:54.320 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@160 -- # set_up target1 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set target1 up 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@161 -- # set_up target1_br 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_ns target1 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772163 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 
00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:12:54.579 10.0.0.3 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772164 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:12:54.579 10.0.0.4 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@75 -- # set_up initiator1 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:12:54.579 13:17:56 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@138 -- # set_up target1_br 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 2 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:12:54.579 
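At this point setup_interfaces has built both initiator/target pairs: each side is a veth device whose *_br peer hangs off nvmf_br, the target end is moved into the nvmf_ns_spdk namespace, addresses are mirrored into ifalias for later lookup, and an ACCEPT rule is opened for port 4420. A condensed sketch of one such pair (pair 0), assembled from the commands in the trace; the ordering and grouping here are illustrative, not the setup.sh source:

  # Sketch of one initiator/target pair as created above (nvmf_br and the
  # nvmf_ns_spdk namespace were created earlier in the trace).
  ip link add initiator0 type veth peer name initiator0_br
  ip link add target0    type veth peer name target0_br
  ip link set target0 netns nvmf_ns_spdk                   # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev initiator0
  echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias    # recorded for later lookup
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
  echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias
  ip link set initiator0_br master nvmf_br                 # both *_br peers join the bridge
  ip link set target0_br master nvmf_br
  for dev in initiator0 initiator0_br target0_br; do ip link set "$dev" up; done
  ip netns exec nvmf_ns_spdk ip link set target0 up
  iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'

The ping loop that follows in the trace then checks each address once (initiator IPs from inside the namespace, target IPs from the host) before the test proceeds.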
13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo initiator0 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=initiator0 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:54.579 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:54.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:54.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:12:54.579 00:12:54.579 --- 10.0.0.1 ping statistics --- 00:12:54.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.580 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo target0 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=target0 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:54.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:54.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:12:54.580 00:12:54.580 --- 10.0.0.2 ping statistics --- 00:12:54.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.580 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo initiator1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=initiator1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:12:54.580 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:54.580 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:12:54.580 00:12:54.580 --- 10.0.0.3 ping statistics --- 00:12:54.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.580 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo target1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=target1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:12:54.580 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:12:54.580 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.096 ms 00:12:54.580 00:12:54.580 --- 10.0.0.4 ping statistics --- 00:12:54.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.580 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # return 0 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo initiator0 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=initiator0 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator1 00:12:54.580 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo initiator1 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=initiator1 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo target0 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=target0 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # 
echo 10.0.0.2 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target1 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo target1 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=target1 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:12:54.581 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@323 -- # ip netns 
exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@324 -- # nvmfpid=64247 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # waitforlisten 64247 00:12:54.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 64247 ']' 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:54.839 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:54.839 [2024-09-27 13:17:56.517411] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:12:54.839 [2024-09-27 13:17:56.517512] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.839 [2024-09-27 13:17:56.664001] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.098 [2024-09-27 13:17:56.732541] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.098 [2024-09-27 13:17:56.732616] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.098 [2024-09-27 13:17:56.732631] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.098 [2024-09-27 13:17:56.732641] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.098 [2024-09-27 13:17:56.732650] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
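(Editor's note: the nvmfappstart/waitforlisten sequence traced above reduces to the short shell sketch below. The nvmf_tgt command line is copied from the trace; the readiness loop is a simplification of the harness's waitforlisten helper, which retries up to 100 times against /var/tmp/spdk.sock.)
# Sketch only: launch the NVMe-oF target inside the test namespace and wait for its RPC socket.
ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    # the real waitforlisten probes the RPC server; checking for the UNIX socket is the simplified form
    [[ -S /var/tmp/spdk.sock ]] && break
    sleep 0.5
done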
00:12:55.098 [2024-09-27 13:17:56.732696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.098 [2024-09-27 13:17:56.762867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:55.098 [2024-09-27 13:17:56.853878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:55.098 Malloc0 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:55.098 [2024-09-27 
13:17:56.912538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64266 00:12:55.098 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:55.099 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:55.099 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64266 /var/tmp/bdevperf.sock 00:12:55.099 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 64266 ']' 00:12:55.099 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:55.099 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:55.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:55.099 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:55.099 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:55.099 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:55.356 [2024-09-27 13:17:56.983535] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
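(Editor's note: the rpc_cmd calls traced at queue_depth.sh lines 23-27 above correspond to plain rpc.py invocations like the sketch below. Arguments are copied from the trace; the scripts/rpc.py path and its default /var/tmp/spdk.sock socket are assumptions based on the standard SPDK tree, not something this log states.)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                      # assumed RPC client location
$rpc nvmf_create_transport -t tcp -o -u 8192                         # TCP transport, flags as in NVMF_TRANSPORT_OPTS
$rpc bdev_malloc_create 64 512 -b Malloc0                            # 64 MB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0        # attach Malloc0 to the subsystem
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
bdevperf (started with -q 1024 -o 4096 -w verify -t 10 as shown above) then attaches to that subsystem via bdev_nvme_attach_controller in the trace that follows and is driven by bdevperf.py perform_tests.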
00:12:55.356 [2024-09-27 13:17:56.983658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64266 ] 00:12:55.356 [2024-09-27 13:17:57.123161] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.356 [2024-09-27 13:17:57.180928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.615 [2024-09-27 13:17:57.209945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:55.615 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:55.615 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:12:55.615 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:55.615 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.615 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:55.615 NVMe0n1 00:12:55.615 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.615 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:55.873 Running I/O for 10 seconds... 00:13:05.773 6144.00 IOPS, 24.00 MiB/s 6698.00 IOPS, 26.16 MiB/s 6980.67 IOPS, 27.27 MiB/s 7169.75 IOPS, 28.01 MiB/s 7199.40 IOPS, 28.12 MiB/s 7295.83 IOPS, 28.50 MiB/s 7345.71 IOPS, 28.69 MiB/s 7415.00 IOPS, 28.96 MiB/s 7416.89 IOPS, 28.97 MiB/s 7458.80 IOPS, 29.14 MiB/s 00:13:05.773 Latency(us) 00:13:05.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.773 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:05.773 Verification LBA range: start 0x0 length 0x4000 00:13:05.773 NVMe0n1 : 10.10 7486.59 29.24 0.00 0.00 136042.96 27405.96 115343.36 00:13:05.773 =================================================================================================================== 00:13:05.773 Total : 7486.59 29.24 0.00 0.00 136042.96 27405.96 115343.36 00:13:05.773 { 00:13:05.773 "results": [ 00:13:05.773 { 00:13:05.773 "job": "NVMe0n1", 00:13:05.773 "core_mask": "0x1", 00:13:05.773 "workload": "verify", 00:13:05.773 "status": "finished", 00:13:05.773 "verify_range": { 00:13:05.773 "start": 0, 00:13:05.773 "length": 16384 00:13:05.773 }, 00:13:05.773 "queue_depth": 1024, 00:13:05.773 "io_size": 4096, 00:13:05.773 "runtime": 10.099656, 00:13:05.773 "iops": 7486.591622526549, 00:13:05.773 "mibps": 29.244498525494333, 00:13:05.773 "io_failed": 0, 00:13:05.773 "io_timeout": 0, 00:13:05.773 "avg_latency_us": 136042.96373466455, 00:13:05.773 "min_latency_us": 27405.963636363635, 00:13:05.773 "max_latency_us": 115343.36 00:13:05.773 } 00:13:05.773 ], 00:13:05.773 "core_count": 1 00:13:05.773 } 00:13:05.773 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64266 00:13:05.773 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 64266 ']' 00:13:05.773 13:18:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 64266 00:13:05.773 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:13:05.773 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:05.773 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64266 00:13:05.773 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:05.774 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:05.774 killing process with pid 64266 00:13:05.774 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64266' 00:13:05.774 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 64266 00:13:05.774 Received shutdown signal, test time was about 10.000000 seconds 00:13:05.774 00:13:05.774 Latency(us) 00:13:05.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.774 =================================================================================================================== 00:13:05.774 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:05.774 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 64266 00:13:06.033 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:06.033 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:06.033 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@331 -- # nvmfcleanup 00:13:06.033 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:13:06.033 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:06.033 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:13:06.033 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:06.033 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:06.033 rmmod nvme_tcp 00:13:06.033 rmmod nvme_fabrics 00:13:06.033 rmmod nvme_keyring 00:13:06.033 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:06.292 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:13:06.292 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:13:06.292 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # '[' -n 64247 ']' 00:13:06.292 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@333 -- # killprocess 64247 00:13:06.292 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 64247 ']' 00:13:06.292 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 64247 00:13:06.292 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:13:06.292 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:06.292 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64247 00:13:06.292 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:06.292 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:06.292 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64247' 00:13:06.292 killing process with pid 64247 00:13:06.292 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 64247 00:13:06.292 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 64247 00:13:06.292 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:13:06.292 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # nvmf_fini 00:13:06.292 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@264 -- # local dev 00:13:06.292 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@267 -- # remove_target_ns 00:13:06.292 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:06.292 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:06.292 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:06.292 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@268 -- # delete_main_bridge 00:13:06.292 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:06.292 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:13:06.292 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:13:06.292 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:13:06.292 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:13:06.292 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:13:06.551 
13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@271 -- # continue 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@271 -- # continue 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@284 -- # iptr 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@538 -- # iptables-restore 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@538 -- # iptables-save 00:13:06.551 00:13:06.551 real 0m12.520s 00:13:06.551 user 0m21.488s 00:13:06.551 sys 0m2.034s 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:06.551 ************************************ 00:13:06.551 END TEST nvmf_queue_depth 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:06.551 ************************************ 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:06.551 ************************************ 00:13:06.551 START TEST nvmf_nmic 00:13:06.551 ************************************ 00:13:06.551 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:06.551 * Looking for test storage... 
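(Editor's note: as a quick sanity check on the bdevperf summary printed above, the MiB/s column is just IOPS multiplied by the 4096-byte I/O size used for the run; the one-liner below reproduces the reported 29.24 MiB/s.)
awk 'BEGIN { printf "%.2f MiB/s\n", 7486.59 * 4096 / (1024 * 1024) }'   # prints 29.24, matching the JSON "mibps" field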
00:13:06.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:06.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.812 --rc genhtml_branch_coverage=1 00:13:06.812 --rc genhtml_function_coverage=1 00:13:06.812 --rc genhtml_legend=1 00:13:06.812 --rc geninfo_all_blocks=1 00:13:06.812 --rc geninfo_unexecuted_blocks=1 00:13:06.812 00:13:06.812 ' 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:06.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.812 --rc genhtml_branch_coverage=1 00:13:06.812 --rc genhtml_function_coverage=1 00:13:06.812 --rc genhtml_legend=1 00:13:06.812 --rc geninfo_all_blocks=1 00:13:06.812 --rc geninfo_unexecuted_blocks=1 00:13:06.812 00:13:06.812 ' 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:06.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.812 --rc genhtml_branch_coverage=1 00:13:06.812 --rc genhtml_function_coverage=1 00:13:06.812 --rc genhtml_legend=1 00:13:06.812 --rc geninfo_all_blocks=1 00:13:06.812 --rc geninfo_unexecuted_blocks=1 00:13:06.812 00:13:06.812 ' 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:06.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.812 --rc genhtml_branch_coverage=1 00:13:06.812 --rc genhtml_function_coverage=1 00:13:06.812 --rc genhtml_legend=1 00:13:06.812 --rc geninfo_all_blocks=1 00:13:06.812 --rc geninfo_unexecuted_blocks=1 00:13:06.812 00:13:06.812 ' 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.812 13:18:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:06.812 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:06.813 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.813 13:18:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # prepare_net_devs 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # local -g is_hw=no 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # remove_target_ns 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@276 -- # nvmf_veth_init 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@233 -- # create_target_ns 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@234 -- # create_main_bridge 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@114 -- # delete_main_bridge 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@130 -- # return 0 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 
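(Editor's note: the nvmf_veth_init trace that continues below builds one bridge plus two initiator/target veth pairs. The sketch condenses the commands for pair 0 into one place; pair 1 repeats them with initiator1/target1 and 10.0.0.3/10.0.0.4. Commands are taken from the trace, with the iptables rule shown without the SPDK_NVMF comment tag the harness adds.)
# Condensed sketch of setup_interface_pair 0 as executed by the trace around this point.
ip link add initiator0 type veth peer name initiator0_br     # initiator-side veth pair
ip link add target0    type veth peer name target0_br        # target-side veth pair
ip link set target0 netns nvmf_ns_spdk                       # target end lives inside the SPDK namespace
ip addr add 10.0.0.1/24 dev initiator0
echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias        # ifalias is what get_ip_address reads back later
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
ip netns exec nvmf_ns_spdk ip link set target0 up
ip link set initiator0 up
ip link set initiator0_br master nvmf_br                     # both *_br peers hang off the nvmf_br bridge
ip link set target0_br master nvmf_br
ip link set initiator0_br up
ip link set target0_br up
iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic to port 4420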
00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:13:06.813 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@160 -- # set_up initiator0 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:13:06.814 13:18:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@160 -- # set_up target0 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set target0 up 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@161 -- # set_up target0_br 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_ns target0 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:13:06.814 13:18:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:13:06.814 10.0.0.1 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:13:06.814 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:07.074 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:13:07.074 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:13:07.074 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:13:07.074 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:13:07.074 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:13:07.074 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:13:07.074 10.0.0.2 00:13:07.074 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@75 -- # set_up initiator0 00:13:07.074 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:13:07.074 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:07.074 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:13:07.074 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:13:07.074 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:13:07.074 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:13:07.074 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:07.074 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:07.074 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:13:07.074 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:13:07.075 13:18:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@138 -- # set_up target0_br 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:07.075 13:18:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@160 -- # set_up initiator1 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@160 -- # set_up target1 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set target1 up 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@161 -- # set_up target1_br 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@70 -- # add_to_ns target1 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772163 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:13:07.075 10.0.0.3 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772164 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:13:07.075 10.0.0.4 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@75 -- # set_up initiator1 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:13:07.075 13:18:08 
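The addresses fed to set_ip come from a single integer pool (167772161 is 0x0A000001, i.e. 10.0.0.1), and val_to_ip simply splits that value into octets, which is why 167772163 and 167772164 print as 10.0.0.3 and 10.0.0.4 above. A minimal equivalent, assuming the helper is nothing more than a printf over the shifted value:

    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
            $(( (val >> 8) & 0xff ))  $(( val & 0xff ))
    }
    val_to_ip 167772163   # 10.0.0.3
    val_to_ip 167772164   # 10.0.0.4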
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:13:07.075 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@138 -- # set_up target1_br 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:13:07.076 13:18:08 
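The ipts call above is a thin wrapper that tags every rule it inserts with an SPDK_NVMF comment, which is what lets the teardown at the end of the run strip exactly these rules and nothing else. The pattern visible in the trace, sketched (not the helper verbatim):

    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

    ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port on this device
    # later, during nvmf_fini:
    iptables-save | grep -v SPDK_NVMF | iptables-restore           # drop only the tagged test rules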
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 2 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo initiator0 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=initiator0 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:07.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:07.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:13:07.076 00:13:07.076 --- 10.0.0.1 ping statistics --- 00:13:07.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.076 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target0 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo target0 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=target0 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:13:07.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:07.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:13:07.076 00:13:07.076 --- 10.0.0.2 ping statistics --- 00:13:07.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.076 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:13:07.076 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:07.077 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator1 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo initiator1 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=initiator1 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:13:07.336 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:07.336 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:13:07.336 00:13:07.336 --- 10.0.0.3 ping statistics --- 00:13:07.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.336 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target1 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo target1 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=target1 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:13:07.336 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:13:07.337 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:13:07.337 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:13:07.337 00:13:07.337 --- 10.0.0.4 ping statistics --- 00:13:07.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.337 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # return 0 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo initiator0 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=initiator0 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:07.337 13:18:08 
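The four PING blocks above are the connectivity check for both pairs: each initiator address is pinged from inside the target namespace and each target address from the host side, with the addresses read back from the ifalias files written earlier. The loop boils down to something like:

    for pair in 0 1; do
        ip netns exec nvmf_ns_spdk ping -c 1 "$(cat /sys/class/net/initiator$pair/ifalias)"
        ping -c 1 "$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target$pair/ifalias)"
    done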
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator1 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo initiator1 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=initiator1 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target0 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo target0 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=target0 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:13:07.337 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:07.337 13:18:09 
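What this block ultimately produces is the small set of legacy variables the nmic test consumes; given the aliases written earlier, and assuming the remaining lookup resolves the same way the first three did, they come out as:

    NVMF_FIRST_INITIATOR_IP=10.0.0.1    # cat /sys/class/net/initiator0/ifalias
    NVMF_SECOND_INITIATOR_IP=10.0.0.3   # cat /sys/class/net/initiator1/ifalias
    NVMF_FIRST_TARGET_IP=10.0.0.2       # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias
    NVMF_SECOND_TARGET_IP=10.0.0.4      # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias
    NVMF_TARGET_INTERFACE=target0
    NVMF_TARGET_INTERFACE2=target1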
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target1 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo target1 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=target1 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:07.337 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.338 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:13:07.338 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:13:07.338 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.338 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:13:07.338 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:13:07.338 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:07.338 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:13:07.338 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:07.338 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.338 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@324 -- # nvmfpid=64633 00:13:07.338 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:07.338 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # waitforlisten 64633 00:13:07.338 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 64633 ']' 00:13:07.338 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.338 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:13:07.338 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.338 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:07.338 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.338 [2024-09-27 13:18:09.109848] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:13:07.338 [2024-09-27 13:18:09.109961] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.597 [2024-09-27 13:18:09.250600] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.597 [2024-09-27 13:18:09.310460] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.597 [2024-09-27 13:18:09.310519] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.597 [2024-09-27 13:18:09.310531] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.597 [2024-09-27 13:18:09.310539] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.597 [2024-09-27 13:18:09.310547] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:07.597 [2024-09-27 13:18:09.310725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.597 [2024-09-27 13:18:09.310778] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.597 [2024-09-27 13:18:09.310889] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.597 [2024-09-27 13:18:09.311488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.597 [2024-09-27 13:18:09.340036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:07.597 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:07.597 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:13:07.597 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:13:07.597 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:07.597 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.597 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.597 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:07.597 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.597 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.856 [2024-09-27 13:18:09.444514] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.856 Malloc0 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.856 [2024-09-27 13:18:09.495157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.856 test case1: single bdev can't be used in multiple subsystems 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:07.856 13:18:09 
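With nvmf_tgt started above inside the namespace (ip netns exec nvmf_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF), the rest of nmic.sh drives it over JSON-RPC through the suite's rpc_cmd helper. The calls traced in this block, laid out flat:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # test case 1: a bdev already claimed by cnode1 cannot back a namespace in cnode2
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail; the error response follows below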
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.856 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.856 [2024-09-27 13:18:09.519013] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:07.856 [2024-09-27 13:18:09.519051] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:07.856 [2024-09-27 13:18:09.519063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.856 request: 00:13:07.856 { 00:13:07.856 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:07.856 "namespace": { 00:13:07.856 "bdev_name": "Malloc0", 00:13:07.856 "no_auto_visible": false 00:13:07.856 }, 00:13:07.856 "method": "nvmf_subsystem_add_ns", 00:13:07.856 "req_id": 1 00:13:07.856 } 00:13:07.856 Got JSON-RPC error response 00:13:07.856 response: 00:13:07.856 { 00:13:07.856 "code": -32602, 00:13:07.856 "message": "Invalid parameters" 00:13:07.856 } 00:13:07.857 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:07.857 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:07.857 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:07.857 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:07.857 Adding namespace failed - expected result. 00:13:07.857 test case2: host connect to nvmf target in multiple paths 00:13:07.857 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:07.857 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:07.857 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.857 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.857 [2024-09-27 13:18:09.531132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:07.857 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.857 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid=1dd592da-03b1-46ba-b90a-3aebb25e3723 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:09.286 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid=1dd592da-03b1-46ba-b90a-3aebb25e3723 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:09.286 13:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:09.286 13:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:09.286 13:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.286 13:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:09.286 13:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
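Both connect calls above attach the same host NQN to cnode1 through the 4420 and 4421 listeners, and once lsblk reports a device with the SPDKISFASTANDAWESOME serial the fio wrapper exercises it. Test case 2, condensed with the same arguments as in the trace:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 \
        --hostid=1dd592da-03b1-46ba-b90a-3aebb25e3723
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 \
        --hostid=1dd592da-03b1-46ba-b90a-3aebb25e3723
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME             # wait until this reports 1
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1                      # detaches both paths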
common/autotest_common.sh@1205 -- # sleep 2 00:13:11.820 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:11.820 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:11.820 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.820 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:11.820 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.820 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:11.820 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:11.820 [global] 00:13:11.820 thread=1 00:13:11.820 invalidate=1 00:13:11.820 rw=write 00:13:11.820 time_based=1 00:13:11.820 runtime=1 00:13:11.820 ioengine=libaio 00:13:11.820 direct=1 00:13:11.820 bs=4096 00:13:11.820 iodepth=1 00:13:11.820 norandommap=0 00:13:11.820 numjobs=1 00:13:11.820 00:13:11.820 verify_dump=1 00:13:11.820 verify_backlog=512 00:13:11.820 verify_state_save=0 00:13:11.820 do_verify=1 00:13:11.820 verify=crc32c-intel 00:13:11.820 [job0] 00:13:11.820 filename=/dev/nvme0n1 00:13:11.820 Could not set queue depth (nvme0n1) 00:13:11.820 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:11.820 fio-3.35 00:13:11.820 Starting 1 thread 00:13:12.757 00:13:12.757 job0: (groupid=0, jobs=1): err= 0: pid=64728: Fri Sep 27 13:18:14 2024 00:13:12.757 read: IOPS=2978, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec) 00:13:12.757 slat (nsec): min=13886, max=55056, avg=15640.24, stdev=2579.96 00:13:12.757 clat (usec): min=143, max=2473, avg=181.01, stdev=45.19 00:13:12.757 lat (usec): min=161, max=2490, avg=196.65, stdev=45.34 00:13:12.757 clat percentiles (usec): 00:13:12.757 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:13:12.757 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 184], 00:13:12.757 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 198], 95.00th=[ 202], 00:13:12.757 | 99.00th=[ 215], 99.50th=[ 217], 99.90th=[ 231], 99.95th=[ 709], 00:13:12.757 | 99.99th=[ 2474] 00:13:12.757 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:13:12.757 slat (usec): min=18, max=109, avg=21.99, stdev= 4.46 00:13:12.757 clat (usec): min=88, max=237, avg=109.15, stdev=10.40 00:13:12.757 lat (usec): min=108, max=347, avg=131.14, stdev=12.54 00:13:12.757 clat percentiles (usec): 00:13:12.757 | 1.00th=[ 91], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 101], 00:13:12.757 | 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 111], 00:13:12.757 | 70.00th=[ 113], 80.00th=[ 116], 90.00th=[ 122], 95.00th=[ 128], 00:13:12.757 | 99.00th=[ 143], 99.50th=[ 151], 99.90th=[ 161], 99.95th=[ 165], 00:13:12.757 | 99.99th=[ 237] 00:13:12.757 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:13:12.757 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:12.757 lat (usec) : 100=8.87%, 250=91.10%, 750=0.02% 00:13:12.757 lat (msec) : 4=0.02% 00:13:12.757 cpu : usr=3.10%, sys=8.30%, ctx=6053, majf=0, minf=5 00:13:12.757 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:12.757 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:12.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:12.757 issued rwts: total=2981,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:12.757 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:12.757 00:13:12.757 Run status group 0 (all jobs): 00:13:12.757 READ: bw=11.6MiB/s (12.2MB/s), 11.6MiB/s-11.6MiB/s (12.2MB/s-12.2MB/s), io=11.6MiB (12.2MB), run=1001-1001msec 00:13:12.757 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:13:12.757 00:13:12.757 Disk stats (read/write): 00:13:12.757 nvme0n1: ios=2610/2928, merge=0/0, ticks=493/340, in_queue=833, util=91.38% 00:13:12.757 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:12.757 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.757 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:12.757 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:12.757 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.757 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:12.757 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.757 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:12.757 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:12.757 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:12.757 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@331 -- # nvmfcleanup 00:13:12.757 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:13:12.757 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:12.757 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:13:12.757 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:12.757 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:12.757 rmmod nvme_tcp 00:13:12.757 rmmod nvme_fabrics 00:13:13.016 rmmod nvme_keyring 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # '[' -n 64633 ']' 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@333 -- # killprocess 64633 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 64633 ']' 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 64633 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64633 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64633' 00:13:13.016 killing process with pid 64633 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 64633 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 64633 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # nvmf_fini 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@264 -- # local dev 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@267 -- # remove_target_ns 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:13.016 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@268 -- # delete_main_bridge 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 
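The nvmf_fini teardown running above and below walks dev_map and removes the topology. Condensed, and assuming remove_target_ns (whose output is redirected away in the trace) deletes the nvmf_ns_spdk namespace, the cleanup amounts to:

    ip netns delete nvmf_ns_spdk       # assumption; it would take target0/target1 with it, hence the 'continue's below
    ip link delete nvmf_br             # main bridge
    ip link delete initiator0
    ip link delete initiator1
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # sweep only the tagged ACCEPT rules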
00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:13.276 13:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:13:13.276 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@271 -- # continue 00:13:13.276 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:13.276 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:13:13.276 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@271 -- # continue 00:13:13.276 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:13:13.276 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:13:13.276 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:13:13.276 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@284 -- # iptr 00:13:13.276 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@538 -- # iptables-save 00:13:13.276 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:13:13.276 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@538 -- # iptables-restore 00:13:13.276 00:13:13.276 real 0m6.697s 00:13:13.276 user 0m20.481s 00:13:13.276 sys 0m2.731s 00:13:13.276 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.276 ************************************ 00:13:13.276 END TEST nvmf_nmic 00:13:13.276 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:13.276 ************************************ 00:13:13.276 13:18:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:13.276 13:18:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:13.276 13:18:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.276 13:18:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:13.276 ************************************ 00:13:13.276 START TEST nvmf_fio_target 00:13:13.276 ************************************ 00:13:13.276 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:13.276 * Looking for test storage... 
00:13:13.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:13.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.536 --rc genhtml_branch_coverage=1 00:13:13.536 --rc genhtml_function_coverage=1 00:13:13.536 --rc genhtml_legend=1 00:13:13.536 --rc geninfo_all_blocks=1 00:13:13.536 --rc geninfo_unexecuted_blocks=1 00:13:13.536 00:13:13.536 ' 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:13.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.536 --rc genhtml_branch_coverage=1 00:13:13.536 --rc genhtml_function_coverage=1 00:13:13.536 --rc genhtml_legend=1 00:13:13.536 --rc geninfo_all_blocks=1 00:13:13.536 --rc geninfo_unexecuted_blocks=1 00:13:13.536 00:13:13.536 ' 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:13.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.536 --rc genhtml_branch_coverage=1 00:13:13.536 --rc genhtml_function_coverage=1 00:13:13.536 --rc genhtml_legend=1 00:13:13.536 --rc geninfo_all_blocks=1 00:13:13.536 --rc geninfo_unexecuted_blocks=1 00:13:13.536 00:13:13.536 ' 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:13.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.536 --rc genhtml_branch_coverage=1 00:13:13.536 --rc genhtml_function_coverage=1 00:13:13.536 --rc genhtml_legend=1 00:13:13.536 --rc geninfo_all_blocks=1 00:13:13.536 --rc geninfo_unexecuted_blocks=1 00:13:13.536 00:13:13.536 ' 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:13.536 
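The lt / cmp_versions helpers traced above only decide whether the installed lcov predates version 2 so the matching LCOV_OPTS can be exported. A simplified re-implementation of that dotted-version comparison, for illustration only (the in-tree scripts/common.sh handles more separators and edge cases than this sketch):

# Return 0 (true) when version $1 sorts before version $2, field by field.
version_lt() {
    local IFS='.-:'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal, hence not less-than
}

version_lt 1.15 2 && echo 'lcov < 2: keep the --rc lcov_*_coverage options'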
13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.536 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:13.537 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' 
']' 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # prepare_net_devs 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # local -g is_hw=no 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # remove_target_ns 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@276 -- # nvmf_veth_init 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@233 -- # create_target_ns 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:13.537 13:18:15 
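The '[: : integer expression expected' message a few entries up is bash complaining about a numeric test whose operand is empty in this environment ('[' '' -eq 1 ']' at nvmf/common.sh line 31); the script shrugs it off and continues. A generic sketch of the failure mode and the usual guard, with SOME_TEST_FLAG as a stand-in name rather than the actual variable:

# SOME_TEST_FLAG is a hypothetical placeholder for whichever flag is unset here.
SOME_TEST_FLAG=""

# Reproduces the logged complaint: an empty operand is not an integer.
[ "$SOME_TEST_FLAG" -eq 1 ] && echo "flag set"

# Defaulting the expansion keeps the numeric test well-formed when unset/empty.
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
fi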
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@234 -- # create_main_bridge 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@114 -- # delete_main_bridge 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@130 -- # return 0 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 
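nvmf_veth_init begins, as traced above, by creating the target namespace and the central bridge everything else hangs off. Minimal sketch with the same names:

# Namespace that will host the target-side veth ends.
ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up

# Central bridge the *_br peer devices get enslaved to.
ip link add nvmf_br type bridge
ip link set nvmf_br up

# Allow forwarding across the bridge; the SPDK_NVMF comment tag is what lets
# the teardown strip exactly these rules later with 'grep -v SPDK_NVMF'.
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'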
00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@160 -- # set_up initiator0 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@160 -- # set_up target0 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set target0 up 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@161 -- # set_up target0_br 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:13:13.537 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_ns target0 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:13:13.538 10.0.0.1 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee 
/sys/class/net/target0/ifalias 00:13:13.538 10.0.0.2 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@75 -- # set_up initiator0 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:13:13.538 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@138 -- # set_up target0_br 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:13:13.798 13:18:15 
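The addresses handed out above come from a 32-bit pool value (167772161 is 0x0a000001, i.e. 10.0.0.1) that val_to_ip renders with printf, and set_ip both configures the address and records it in the device's ifalias so later helpers can read it back. A sketch of the two steps for the host side; the traced target side runs the same commands under 'ip netns exec nvmf_ns_spdk':

# Render a 32-bit value as a dotted quad, as val_to_ip does above.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >>  8) & 0xff )) \
        $((  val        & 0xff ))
}

# Configure the address and stash it in ifalias for later lookup.
set_ip_sketch() {
    local dev=$1 addr
    addr=$(val_to_ip "$2")
    ip addr add "$addr/24" dev "$dev"
    echo "$addr" | tee "/sys/class/net/$dev/ifalias"
}

val_to_ip 167772161                 # -> 10.0.0.1
set_ip_sketch initiator0 167772161  # ip addr add 10.0.0.1/24 dev initiator0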
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:13:13.798 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@160 -- # set_up initiator1 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 
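Every initiator/target pair is wired identically; the trace has just finished pair 0 and is repeating the steps for initiator1/target1. Condensed sketch of one pair, with the address assignment already shown above left out:

# Pair 0 as traced above (pair 1 swaps in initiator1/target1 and 10.0.0.3/.4).
ip link add initiator0 type veth peer name initiator0_br
ip link add target0    type veth peer name target0_br

# The target end moves into the test namespace; both *_br peers join the bridge.
ip link set target0 netns nvmf_ns_spdk
ip link set initiator0_br master nvmf_br
ip link set target0_br master nvmf_br

# Bring everything up (the moved end has to be raised inside the namespace).
ip link set initiator0 up
ip link set initiator0_br up
ip link set target0_br up
ip netns exec nvmf_ns_spdk ip link set target0 up

# Accept NVMe/TCP traffic on port 4420 arriving on the initiator-facing device,
# tagged so the teardown can remove the rule again.
iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'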
00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@160 -- # set_up target1 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set target1 up 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@161 -- # set_up target1_br 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_ns target1 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772163 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:13:13.799 
13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:13:13.799 10.0.0.3 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772164 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:13:13.799 10.0.0.4 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@75 -- # set_up initiator1 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:13:13.799 
13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@138 -- # set_up target1_br 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 2 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:13:13.799 13:18:15 
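The ping loop that follows resolves each logical key (initiator0, target0, ...) through dev_map and reads the address back from the ifalias written earlier, going through the namespace for target-side devices. Standalone sketch of that lookup; the real helpers route the namespace choice through a nameref rather than the key prefix used here:

# Logical-name -> device bookkeeping, as kept in dev_map above.
declare -A dev_map=(
    [initiator0]=initiator0 [target0]=target0
    [initiator1]=initiator1 [target1]=target1
)

# Read back the address recorded in ifalias; target devices live in the netns.
get_ip_sketch() {
    local key=$1 dev=${dev_map[$key]}
    if [[ $key == target* ]]; then
        ip netns exec nvmf_ns_spdk cat "/sys/class/net/$dev/ifalias"
    else
        cat "/sys/class/net/$dev/ifalias"
    fi
}

get_ip_sketch initiator0   # -> 10.0.0.1 in this run
get_ip_sketch target0      # -> 10.0.0.2 in this run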
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo initiator0 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=initiator0 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:13:13.799 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:13.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:13.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:13:13.800 00:13:13.800 --- 10.0.0.1 ping statistics --- 00:13:13.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.800 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo target0 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=target0 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:13:13.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:13.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.024 ms 00:13:13.800 00:13:13.800 --- 10.0.0.2 ping statistics --- 00:13:13.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.800 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator1 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo initiator1 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=initiator1 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:13:13.800 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:13.800 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:13:13.800 00:13:13.800 --- 10.0.0.3 ping statistics --- 00:13:13.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.800 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target1 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo target1 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=target1 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:13:13.800 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:13:13.800 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 00:13:13.800 00:13:13.800 --- 10.0.0.4 ping statistics --- 00:13:13.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.800 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # return 0 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo initiator0 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=initiator0 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:13:13.800 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # 
local dev=initiator1 in_ns= ip 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator1 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo initiator1 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=initiator1 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo target0 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=target0 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:13.801 13:18:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target1 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo target1 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=target1 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:13:13.801 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:13:14.060 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:14.060 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:13:14.060 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:14.060 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.060 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@324 -- # nvmfpid=64965 00:13:14.060 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:14.060 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # waitforlisten 64965 00:13:14.060 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 64965 ']' 00:13:14.060 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.060 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:14.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.060 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.060 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:14.060 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.060 [2024-09-27 13:18:15.715236] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:13:14.060 [2024-09-27 13:18:15.715333] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.060 [2024-09-27 13:18:15.851659] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.318 [2024-09-27 13:18:15.910058] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.318 [2024-09-27 13:18:15.910129] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.318 [2024-09-27 13:18:15.910141] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.318 [2024-09-27 13:18:15.910149] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.318 [2024-09-27 13:18:15.910156] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
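The nvmfappstart step traced above launches nvmf_tgt inside the nvmf_ns_spdk network namespace and then waits for its RPC socket before any configuration calls are issued. A minimal bash sketch of that bring-up, reusing only the paths and flags visible in the trace (the test's real waitforlisten helper adds retry limits and error handling beyond this), might look like:

# Launch the target in the test namespace; -m 0xF pins it to cores 0-3,
# -e 0xFFFF sets the tracepoint group mask, -i 0 is the shared-memory/instance id.
ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# The RPC endpoint is a Unix-domain socket, so rpc.py can poll it from outside
# the namespace; rpc_get_methods succeeds once the app is up and listening.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done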
00:13:14.318 [2024-09-27 13:18:15.910222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.318 [2024-09-27 13:18:15.910364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.318 [2024-09-27 13:18:15.910883] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.318 [2024-09-27 13:18:15.910890] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.318 [2024-09-27 13:18:15.940830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:15.254 13:18:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:15.255 13:18:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:13:15.255 13:18:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:13:15.255 13:18:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:15.255 13:18:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.255 13:18:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.255 13:18:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:15.255 [2024-09-27 13:18:17.049430] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.255 13:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:15.822 13:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:15.822 13:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:15.822 13:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:16.081 13:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:16.338 13:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:16.338 13:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:16.597 13:18:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:16.597 13:18:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:16.857 13:18:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:17.116 13:18:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:17.116 13:18:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:17.374 13:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:17.374 13:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:13:17.633 13:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:17.633 13:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:17.891 13:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:18.148 13:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:18.149 13:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:18.408 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:18.408 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.667 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.926 [2024-09-27 13:18:20.725512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.926 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:19.185 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:19.443 13:18:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid=1dd592da-03b1-46ba-b90a-3aebb25e3723 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.701 13:18:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:19.701 13:18:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:19.701 13:18:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.701 13:18:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:19.701 13:18:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:19.701 13:18:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:21.633 13:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:21.633 13:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:21.633 13:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.633 13:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:21.633 13:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == 
nvme_device_counter )) 00:13:21.633 13:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:21.633 13:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:21.633 [global] 00:13:21.633 thread=1 00:13:21.633 invalidate=1 00:13:21.633 rw=write 00:13:21.633 time_based=1 00:13:21.633 runtime=1 00:13:21.633 ioengine=libaio 00:13:21.633 direct=1 00:13:21.633 bs=4096 00:13:21.633 iodepth=1 00:13:21.633 norandommap=0 00:13:21.633 numjobs=1 00:13:21.633 00:13:21.633 verify_dump=1 00:13:21.633 verify_backlog=512 00:13:21.633 verify_state_save=0 00:13:21.633 do_verify=1 00:13:21.633 verify=crc32c-intel 00:13:21.633 [job0] 00:13:21.633 filename=/dev/nvme0n1 00:13:21.633 [job1] 00:13:21.633 filename=/dev/nvme0n2 00:13:21.633 [job2] 00:13:21.633 filename=/dev/nvme0n3 00:13:21.633 [job3] 00:13:21.633 filename=/dev/nvme0n4 00:13:21.891 Could not set queue depth (nvme0n1) 00:13:21.891 Could not set queue depth (nvme0n2) 00:13:21.891 Could not set queue depth (nvme0n3) 00:13:21.891 Could not set queue depth (nvme0n4) 00:13:21.891 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:21.891 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:21.891 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:21.891 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:21.891 fio-3.35 00:13:21.891 Starting 4 threads 00:13:23.265 00:13:23.265 job0: (groupid=0, jobs=1): err= 0: pid=65149: Fri Sep 27 13:18:24 2024 00:13:23.265 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:13:23.265 slat (nsec): min=13117, max=55432, avg=16930.14, stdev=4653.10 00:13:23.265 clat (usec): min=204, max=787, avg=245.19, stdev=27.17 00:13:23.265 lat (usec): min=219, max=805, avg=262.12, stdev=28.44 00:13:23.265 clat percentiles (usec): 00:13:23.265 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 231], 00:13:23.265 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:13:23.265 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 273], 00:13:23.265 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 652], 99.95th=[ 660], 00:13:23.265 | 99.99th=[ 791] 00:13:23.265 write: IOPS=2118, BW=8476KiB/s (8679kB/s)(8484KiB/1001msec); 0 zone resets 00:13:23.265 slat (usec): min=11, max=103, avg=22.70, stdev= 5.17 00:13:23.265 clat (usec): min=118, max=592, avg=191.93, stdev=26.26 00:13:23.265 lat (usec): min=138, max=616, avg=214.63, stdev=27.25 00:13:23.265 clat percentiles (usec): 00:13:23.265 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:13:23.265 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:13:23.265 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 219], 00:13:23.265 | 99.00th=[ 239], 99.50th=[ 262], 99.90th=[ 562], 99.95th=[ 562], 00:13:23.265 | 99.99th=[ 594] 00:13:23.265 bw ( KiB/s): min= 8192, max= 8192, per=20.11%, avg=8192.00, stdev= 0.00, samples=1 00:13:23.265 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:23.265 lat (usec) : 250=83.93%, 500=15.78%, 750=0.26%, 1000=0.02% 00:13:23.265 cpu : usr=1.70%, sys=7.40%, ctx=4169, majf=0, minf=5 00:13:23.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:23.265 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.265 issued rwts: total=2048,2121,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.265 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:23.265 job1: (groupid=0, jobs=1): err= 0: pid=65150: Fri Sep 27 13:18:24 2024 00:13:23.265 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:13:23.265 slat (nsec): min=8416, max=34555, avg=11218.67, stdev=3401.67 00:13:23.265 clat (usec): min=182, max=820, avg=251.45, stdev=28.43 00:13:23.265 lat (usec): min=195, max=831, avg=262.66, stdev=29.24 00:13:23.265 clat percentiles (usec): 00:13:23.265 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:13:23.265 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:13:23.265 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:13:23.265 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 676], 99.95th=[ 676], 00:13:23.265 | 99.99th=[ 824] 00:13:23.265 write: IOPS=2119, BW=8480KiB/s (8683kB/s)(8488KiB/1001msec); 0 zone resets 00:13:23.265 slat (nsec): min=10594, max=69084, avg=19556.72, stdev=7450.11 00:13:23.265 clat (usec): min=89, max=607, avg=195.30, stdev=26.52 00:13:23.265 lat (usec): min=119, max=624, avg=214.85, stdev=27.50 00:13:23.265 clat percentiles (usec): 00:13:23.265 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:13:23.265 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:13:23.265 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 223], 00:13:23.265 | 99.00th=[ 241], 99.50th=[ 277], 99.90th=[ 562], 99.95th=[ 586], 00:13:23.265 | 99.99th=[ 611] 00:13:23.265 bw ( KiB/s): min= 8192, max= 8192, per=20.11%, avg=8192.00, stdev= 0.00, samples=1 00:13:23.265 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:23.265 lat (usec) : 100=0.02%, 250=77.27%, 500=22.42%, 750=0.26%, 1000=0.02% 00:13:23.265 cpu : usr=1.70%, sys=5.10%, ctx=4170, majf=0, minf=11 00:13:23.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:23.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.265 issued rwts: total=2048,2122,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.265 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:23.265 job2: (groupid=0, jobs=1): err= 0: pid=65151: Fri Sep 27 13:18:24 2024 00:13:23.265 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:13:23.265 slat (nsec): min=12997, max=49107, avg=15666.18, stdev=2874.05 00:13:23.265 clat (usec): min=149, max=243, avg=181.59, stdev=12.51 00:13:23.265 lat (usec): min=164, max=259, avg=197.26, stdev=13.00 00:13:23.265 clat percentiles (usec): 00:13:23.265 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:13:23.265 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 184], 00:13:23.265 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 198], 95.00th=[ 204], 00:13:23.265 | 99.00th=[ 215], 99.50th=[ 219], 99.90th=[ 227], 99.95th=[ 233], 00:13:23.265 | 99.99th=[ 245] 00:13:23.265 write: IOPS=2875, BW=11.2MiB/s (11.8MB/s)(11.2MiB/1001msec); 0 zone resets 00:13:23.265 slat (usec): min=16, max=102, avg=22.37, stdev= 4.92 00:13:23.265 clat (usec): min=108, max=5040, avg=146.22, stdev=164.57 00:13:23.266 lat (usec): min=129, max=5063, avg=168.59, stdev=164.97 00:13:23.266 clat percentiles (usec): 00:13:23.266 | 
1.00th=[ 115], 5.00th=[ 121], 10.00th=[ 125], 20.00th=[ 128], 00:13:23.266 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:13:23.266 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 159], 00:13:23.266 | 99.00th=[ 178], 99.50th=[ 258], 99.90th=[ 3654], 99.95th=[ 3752], 00:13:23.266 | 99.99th=[ 5014] 00:13:23.266 bw ( KiB/s): min=12288, max=12288, per=30.17%, avg=12288.00, stdev= 0.00, samples=1 00:13:23.266 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:23.266 lat (usec) : 250=99.72%, 500=0.07%, 750=0.06% 00:13:23.266 lat (msec) : 2=0.02%, 4=0.11%, 10=0.02% 00:13:23.266 cpu : usr=2.10%, sys=8.40%, ctx=5438, majf=0, minf=9 00:13:23.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:23.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.266 issued rwts: total=2560,2878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:23.266 job3: (groupid=0, jobs=1): err= 0: pid=65152: Fri Sep 27 13:18:24 2024 00:13:23.266 read: IOPS=2566, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:13:23.266 slat (nsec): min=12972, max=35296, avg=15748.02, stdev=2086.43 00:13:23.266 clat (usec): min=149, max=281, avg=180.15, stdev=13.85 00:13:23.266 lat (usec): min=163, max=296, avg=195.90, stdev=14.03 00:13:23.266 clat percentiles (usec): 00:13:23.266 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:13:23.266 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:13:23.266 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 204], 00:13:23.266 | 99.00th=[ 219], 99.50th=[ 229], 99.90th=[ 260], 99.95th=[ 273], 00:13:23.266 | 99.99th=[ 281] 00:13:23.266 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:13:23.266 slat (usec): min=16, max=103, avg=22.46, stdev= 4.56 00:13:23.266 clat (usec): min=100, max=523, avg=136.02, stdev=13.64 00:13:23.266 lat (usec): min=120, max=548, avg=158.47, stdev=14.65 00:13:23.266 clat percentiles (usec): 00:13:23.266 | 1.00th=[ 113], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 127], 00:13:23.266 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:13:23.266 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 155], 00:13:23.266 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 217], 99.95th=[ 318], 00:13:23.266 | 99.99th=[ 523] 00:13:23.266 bw ( KiB/s): min=12288, max=12288, per=30.17%, avg=12288.00, stdev= 0.00, samples=1 00:13:23.266 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:23.266 lat (usec) : 250=99.88%, 500=0.11%, 750=0.02% 00:13:23.266 cpu : usr=1.90%, sys=8.90%, ctx=5641, majf=0, minf=14 00:13:23.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:23.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.266 issued rwts: total=2569,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:23.266 00:13:23.266 Run status group 0 (all jobs): 00:13:23.266 READ: bw=36.0MiB/s (37.7MB/s), 8184KiB/s-10.0MiB/s (8380kB/s-10.5MB/s), io=36.0MiB (37.8MB), run=1001-1001msec 00:13:23.266 WRITE: bw=39.8MiB/s (41.7MB/s), 8476KiB/s-12.0MiB/s (8679kB/s-12.6MB/s), io=39.8MiB (41.8MB), run=1001-1001msec 00:13:23.266 00:13:23.266 Disk 
stats (read/write): 00:13:23.266 nvme0n1: ios=1596/2048, merge=0/0, ticks=408/391, in_queue=799, util=87.27% 00:13:23.266 nvme0n2: ios=1581/2048, merge=0/0, ticks=386/371, in_queue=757, util=88.01% 00:13:23.266 nvme0n3: ios=2091/2560, merge=0/0, ticks=391/387, in_queue=778, util=88.45% 00:13:23.266 nvme0n4: ios=2247/2560, merge=0/0, ticks=417/361, in_queue=778, util=89.74% 00:13:23.266 13:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:23.266 [global] 00:13:23.266 thread=1 00:13:23.266 invalidate=1 00:13:23.266 rw=randwrite 00:13:23.266 time_based=1 00:13:23.266 runtime=1 00:13:23.266 ioengine=libaio 00:13:23.266 direct=1 00:13:23.266 bs=4096 00:13:23.266 iodepth=1 00:13:23.266 norandommap=0 00:13:23.266 numjobs=1 00:13:23.266 00:13:23.266 verify_dump=1 00:13:23.266 verify_backlog=512 00:13:23.266 verify_state_save=0 00:13:23.266 do_verify=1 00:13:23.266 verify=crc32c-intel 00:13:23.266 [job0] 00:13:23.266 filename=/dev/nvme0n1 00:13:23.266 [job1] 00:13:23.266 filename=/dev/nvme0n2 00:13:23.266 [job2] 00:13:23.266 filename=/dev/nvme0n3 00:13:23.266 [job3] 00:13:23.266 filename=/dev/nvme0n4 00:13:23.266 Could not set queue depth (nvme0n1) 00:13:23.266 Could not set queue depth (nvme0n2) 00:13:23.266 Could not set queue depth (nvme0n3) 00:13:23.266 Could not set queue depth (nvme0n4) 00:13:23.266 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:23.266 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:23.266 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:23.266 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:23.266 fio-3.35 00:13:23.266 Starting 4 threads 00:13:24.649 00:13:24.649 job0: (groupid=0, jobs=1): err= 0: pid=65211: Fri Sep 27 13:18:26 2024 00:13:24.649 read: IOPS=2872, BW=11.2MiB/s (11.8MB/s)(11.2MiB/1001msec) 00:13:24.649 slat (nsec): min=12016, max=32474, avg=14048.74, stdev=1808.57 00:13:24.649 clat (usec): min=135, max=620, avg=170.18, stdev=21.30 00:13:24.649 lat (usec): min=148, max=634, avg=184.23, stdev=21.41 00:13:24.649 clat percentiles (usec): 00:13:24.649 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:13:24.649 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:13:24.649 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 206], 00:13:24.649 | 99.00th=[ 237], 99.50th=[ 247], 99.90th=[ 482], 99.95th=[ 545], 00:13:24.649 | 99.99th=[ 619] 00:13:24.649 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:13:24.649 slat (nsec): min=15254, max=73802, avg=21202.68, stdev=3942.64 00:13:24.649 clat (usec): min=90, max=253, avg=128.62, stdev=12.19 00:13:24.649 lat (usec): min=110, max=325, avg=149.82, stdev=13.03 00:13:24.649 clat percentiles (usec): 00:13:24.649 | 1.00th=[ 106], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 120], 00:13:24.649 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 128], 60.00th=[ 130], 00:13:24.649 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 151], 00:13:24.649 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 198], 99.95th=[ 204], 00:13:24.649 | 99.99th=[ 253] 00:13:24.649 bw ( KiB/s): min=12263, max=12263, per=25.31%, avg=12263.00, stdev= 0.00, samples=1 00:13:24.649 iops : min= 3065, max= 3065, 
avg=3065.00, stdev= 0.00, samples=1 00:13:24.649 lat (usec) : 100=0.15%, 250=99.65%, 500=0.17%, 750=0.03% 00:13:24.649 cpu : usr=1.80%, sys=8.90%, ctx=5947, majf=0, minf=15 00:13:24.649 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.649 issued rwts: total=2875,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.649 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.649 job1: (groupid=0, jobs=1): err= 0: pid=65212: Fri Sep 27 13:18:26 2024 00:13:24.649 read: IOPS=2857, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec) 00:13:24.649 slat (nsec): min=12210, max=56113, avg=15321.87, stdev=3812.24 00:13:24.649 clat (usec): min=136, max=241, avg=166.53, stdev=11.41 00:13:24.649 lat (usec): min=150, max=255, avg=181.86, stdev=12.57 00:13:24.649 clat percentiles (usec): 00:13:24.649 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:13:24.649 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:13:24.649 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 186], 00:13:24.649 | 99.00th=[ 198], 99.50th=[ 202], 99.90th=[ 221], 99.95th=[ 225], 00:13:24.649 | 99.99th=[ 243] 00:13:24.649 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:13:24.649 slat (nsec): min=14544, max=74164, avg=23315.19, stdev=7853.17 00:13:24.649 clat (usec): min=95, max=2110, avg=129.28, stdev=53.47 00:13:24.650 lat (usec): min=114, max=2139, avg=152.60, stdev=54.35 00:13:24.650 clat percentiles (usec): 00:13:24.650 | 1.00th=[ 106], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 119], 00:13:24.650 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 129], 00:13:24.650 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 141], 95.00th=[ 147], 00:13:24.650 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 611], 99.95th=[ 2024], 00:13:24.650 | 99.99th=[ 2114] 00:13:24.650 bw ( KiB/s): min=12263, max=12263, per=25.31%, avg=12263.00, stdev= 0.00, samples=1 00:13:24.650 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:13:24.650 lat (usec) : 100=0.10%, 250=99.73%, 500=0.10%, 750=0.03% 00:13:24.650 lat (msec) : 4=0.03% 00:13:24.650 cpu : usr=2.30%, sys=9.30%, ctx=5932, majf=0, minf=9 00:13:24.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.650 issued rwts: total=2860,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.650 job2: (groupid=0, jobs=1): err= 0: pid=65213: Fri Sep 27 13:18:26 2024 00:13:24.650 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:13:24.650 slat (nsec): min=12626, max=84959, avg=15867.70, stdev=3794.58 00:13:24.650 clat (usec): min=148, max=5932, avg=185.08, stdev=139.81 00:13:24.650 lat (usec): min=162, max=5957, avg=200.95, stdev=140.24 00:13:24.650 clat percentiles (usec): 00:13:24.650 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:13:24.650 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:13:24.650 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 204], 00:13:24.650 | 99.00th=[ 229], 99.50th=[ 297], 99.90th=[ 1582], 99.95th=[ 3884], 00:13:24.650 | 99.99th=[ 5932] 00:13:24.650 write: IOPS=2936, 
BW=11.5MiB/s (12.0MB/s)(11.5MiB/1001msec); 0 zone resets 00:13:24.650 slat (nsec): min=15830, max=73475, avg=23383.45, stdev=6818.19 00:13:24.650 clat (usec): min=107, max=521, avg=138.27, stdev=13.62 00:13:24.650 lat (usec): min=127, max=540, avg=161.65, stdev=16.20 00:13:24.650 clat percentiles (usec): 00:13:24.650 | 1.00th=[ 119], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 129], 00:13:24.650 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:13:24.650 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 159], 00:13:24.650 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 212], 99.95th=[ 251], 00:13:24.650 | 99.99th=[ 523] 00:13:24.650 bw ( KiB/s): min=12263, max=12263, per=25.31%, avg=12263.00, stdev= 0.00, samples=1 00:13:24.650 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:13:24.650 lat (usec) : 250=99.64%, 500=0.24%, 750=0.07% 00:13:24.650 lat (msec) : 2=0.02%, 4=0.02%, 10=0.02% 00:13:24.650 cpu : usr=1.70%, sys=9.20%, ctx=5499, majf=0, minf=13 00:13:24.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.650 issued rwts: total=2560,2939,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.650 job3: (groupid=0, jobs=1): err= 0: pid=65214: Fri Sep 27 13:18:26 2024 00:13:24.650 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:13:24.650 slat (nsec): min=12435, max=57919, avg=15530.48, stdev=3521.76 00:13:24.650 clat (usec): min=147, max=1594, avg=179.97, stdev=32.88 00:13:24.650 lat (usec): min=162, max=1607, avg=195.50, stdev=33.16 00:13:24.650 clat percentiles (usec): 00:13:24.650 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:13:24.650 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:13:24.650 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 200], 00:13:24.650 | 99.00th=[ 215], 99.50th=[ 223], 99.90th=[ 449], 99.95th=[ 652], 00:13:24.650 | 99.99th=[ 1598] 00:13:24.650 write: IOPS=3040, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec); 0 zone resets 00:13:24.650 slat (nsec): min=16285, max=68230, avg=21535.74, stdev=3513.45 00:13:24.650 clat (usec): min=105, max=631, avg=139.06, stdev=15.17 00:13:24.650 lat (usec): min=125, max=652, avg=160.60, stdev=15.53 00:13:24.650 clat percentiles (usec): 00:13:24.650 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 130], 00:13:24.650 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:13:24.650 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 159], 00:13:24.650 | 99.00th=[ 172], 99.50th=[ 176], 99.90th=[ 255], 99.95th=[ 367], 00:13:24.650 | 99.99th=[ 635] 00:13:24.650 bw ( KiB/s): min=12263, max=12263, per=25.31%, avg=12263.00, stdev= 0.00, samples=1 00:13:24.650 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:13:24.650 lat (usec) : 250=99.80%, 500=0.14%, 750=0.04% 00:13:24.650 lat (msec) : 2=0.02% 00:13:24.650 cpu : usr=2.30%, sys=8.60%, ctx=5605, majf=0, minf=9 00:13:24.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.650 issued rwts: total=2560,3044,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.650 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:13:24.650 00:13:24.650 Run status group 0 (all jobs): 00:13:24.650 READ: bw=42.4MiB/s (44.4MB/s), 9.99MiB/s-11.2MiB/s (10.5MB/s-11.8MB/s), io=42.4MiB (44.5MB), run=1001-1001msec 00:13:24.650 WRITE: bw=47.3MiB/s (49.6MB/s), 11.5MiB/s-12.0MiB/s (12.0MB/s-12.6MB/s), io=47.4MiB (49.7MB), run=1001-1001msec 00:13:24.650 00:13:24.650 Disk stats (read/write): 00:13:24.650 nvme0n1: ios=2586/2560, merge=0/0, ticks=442/353, in_queue=795, util=87.75% 00:13:24.650 nvme0n2: ios=2552/2560, merge=0/0, ticks=472/373, in_queue=845, util=88.65% 00:13:24.650 nvme0n3: ios=2174/2560, merge=0/0, ticks=405/368, in_queue=773, util=88.44% 00:13:24.650 nvme0n4: ios=2263/2560, merge=0/0, ticks=410/379, in_queue=789, util=89.82% 00:13:24.650 13:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:24.650 [global] 00:13:24.650 thread=1 00:13:24.650 invalidate=1 00:13:24.650 rw=write 00:13:24.650 time_based=1 00:13:24.650 runtime=1 00:13:24.650 ioengine=libaio 00:13:24.650 direct=1 00:13:24.650 bs=4096 00:13:24.650 iodepth=128 00:13:24.650 norandommap=0 00:13:24.650 numjobs=1 00:13:24.650 00:13:24.650 verify_dump=1 00:13:24.650 verify_backlog=512 00:13:24.650 verify_state_save=0 00:13:24.650 do_verify=1 00:13:24.650 verify=crc32c-intel 00:13:24.650 [job0] 00:13:24.650 filename=/dev/nvme0n1 00:13:24.650 [job1] 00:13:24.650 filename=/dev/nvme0n2 00:13:24.650 [job2] 00:13:24.650 filename=/dev/nvme0n3 00:13:24.650 [job3] 00:13:24.650 filename=/dev/nvme0n4 00:13:24.650 Could not set queue depth (nvme0n1) 00:13:24.650 Could not set queue depth (nvme0n2) 00:13:24.650 Could not set queue depth (nvme0n3) 00:13:24.650 Could not set queue depth (nvme0n4) 00:13:24.650 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:24.650 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:24.650 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:24.650 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:24.650 fio-3.35 00:13:24.650 Starting 4 threads 00:13:26.026 00:13:26.026 job0: (groupid=0, jobs=1): err= 0: pid=65274: Fri Sep 27 13:18:27 2024 00:13:26.026 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:13:26.026 slat (usec): min=5, max=6427, avg=141.82, stdev=732.18 00:13:26.026 clat (usec): min=10932, max=27092, avg=18199.74, stdev=3507.01 00:13:26.026 lat (usec): min=11006, max=27104, avg=18341.56, stdev=3465.04 00:13:26.026 clat percentiles (usec): 00:13:26.026 | 1.00th=[11731], 5.00th=[13435], 10.00th=[15270], 20.00th=[16188], 00:13:26.026 | 30.00th=[16450], 40.00th=[16712], 50.00th=[16909], 60.00th=[17171], 00:13:26.026 | 70.00th=[17433], 80.00th=[20841], 90.00th=[25297], 95.00th=[25560], 00:13:26.026 | 99.00th=[26870], 99.50th=[27132], 99.90th=[27132], 99.95th=[27132], 00:13:26.026 | 99.99th=[27132] 00:13:26.026 write: IOPS=3643, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1002msec); 0 zone resets 00:13:26.026 slat (usec): min=10, max=5881, avg=127.10, stdev=602.61 00:13:26.026 clat (usec): min=1011, max=25643, avg=16635.58, stdev=3994.83 00:13:26.026 lat (usec): min=1036, max=25668, avg=16762.68, stdev=3968.39 00:13:26.026 clat percentiles (usec): 00:13:26.026 | 1.00th=[ 6587], 5.00th=[12387], 10.00th=[12780], 20.00th=[13566], 00:13:26.026 | 
30.00th=[14091], 40.00th=[14746], 50.00th=[15926], 60.00th=[17433], 00:13:26.026 | 70.00th=[17957], 80.00th=[18482], 90.00th=[23462], 95.00th=[25035], 00:13:26.026 | 99.00th=[25560], 99.50th=[25560], 99.90th=[25560], 99.95th=[25560], 00:13:26.026 | 99.99th=[25560] 00:13:26.026 bw ( KiB/s): min=15112, max=15112, per=23.32%, avg=15112.00, stdev= 0.00, samples=1 00:13:26.026 iops : min= 3778, max= 3778, avg=3778.00, stdev= 0.00, samples=1 00:13:26.026 lat (msec) : 2=0.04%, 4=0.30%, 10=0.58%, 20=79.89%, 50=19.18% 00:13:26.026 cpu : usr=3.00%, sys=10.09%, ctx=229, majf=0, minf=5 00:13:26.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:26.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:26.027 issued rwts: total=3584,3651,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:26.027 job1: (groupid=0, jobs=1): err= 0: pid=65275: Fri Sep 27 13:18:27 2024 00:13:26.027 read: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec) 00:13:26.027 slat (usec): min=6, max=8086, avg=193.18, stdev=799.97 00:13:26.027 clat (usec): min=15464, max=39302, avg=23671.39, stdev=4081.16 00:13:26.027 lat (usec): min=15487, max=39327, avg=23864.57, stdev=4155.05 00:13:26.027 clat percentiles (usec): 00:13:26.027 | 1.00th=[16581], 5.00th=[18482], 10.00th=[19006], 20.00th=[19530], 00:13:26.027 | 30.00th=[20055], 40.00th=[22152], 50.00th=[23462], 60.00th=[25297], 00:13:26.027 | 70.00th=[26608], 80.00th=[27395], 90.00th=[27919], 95.00th=[30540], 00:13:26.027 | 99.00th=[34341], 99.50th=[36963], 99.90th=[38536], 99.95th=[38536], 00:13:26.027 | 99.99th=[39060] 00:13:26.027 write: IOPS=2378, BW=9515KiB/s (9744kB/s)(9544KiB/1003msec); 0 zone resets 00:13:26.027 slat (usec): min=14, max=5612, avg=244.33, stdev=770.92 00:13:26.027 clat (usec): min=2560, max=56602, avg=32746.97, stdev=10258.94 00:13:26.027 lat (usec): min=3552, max=56627, avg=32991.30, stdev=10310.23 00:13:26.027 clat percentiles (usec): 00:13:26.027 | 1.00th=[10028], 5.00th=[18482], 10.00th=[18744], 20.00th=[19530], 00:13:26.027 | 30.00th=[28967], 40.00th=[32375], 50.00th=[33424], 60.00th=[33817], 00:13:26.027 | 70.00th=[36439], 80.00th=[41157], 90.00th=[47449], 95.00th=[51119], 00:13:26.027 | 99.00th=[53740], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:13:26.027 | 99.99th=[56361] 00:13:26.027 bw ( KiB/s): min= 8456, max= 9616, per=13.95%, avg=9036.00, stdev=820.24, samples=2 00:13:26.027 iops : min= 2114, max= 2404, avg=2259.00, stdev=205.06, samples=2 00:13:26.027 lat (msec) : 4=0.20%, 10=0.32%, 20=23.64%, 50=72.15%, 100=3.70% 00:13:26.027 cpu : usr=2.59%, sys=7.98%, ctx=336, majf=0, minf=6 00:13:26.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:13:26.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:26.027 issued rwts: total=2048,2386,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:26.027 job2: (groupid=0, jobs=1): err= 0: pid=65276: Fri Sep 27 13:18:27 2024 00:13:26.027 read: IOPS=4754, BW=18.6MiB/s (19.5MB/s)(18.6MiB/1003msec) 00:13:26.027 slat (usec): min=8, max=3922, avg=99.40, stdev=470.68 00:13:26.027 clat (usec): min=328, max=14652, avg=13081.15, stdev=1207.84 00:13:26.027 lat (usec): min=3332, max=14681, avg=13180.55, 
stdev=1114.52 00:13:26.027 clat percentiles (usec): 00:13:26.027 | 1.00th=[ 7242], 5.00th=[11469], 10.00th=[12387], 20.00th=[12649], 00:13:26.027 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:13:26.027 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[14091], 00:13:26.027 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14484], 99.95th=[14484], 00:13:26.027 | 99.99th=[14615] 00:13:26.027 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:13:26.027 slat (usec): min=10, max=2991, avg=95.57, stdev=411.58 00:13:26.027 clat (usec): min=9185, max=13948, avg=12551.79, stdev=607.49 00:13:26.027 lat (usec): min=10240, max=13989, avg=12647.36, stdev=447.31 00:13:26.027 clat percentiles (usec): 00:13:26.027 | 1.00th=[10028], 5.00th=[11731], 10.00th=[12125], 20.00th=[12256], 00:13:26.027 | 30.00th=[12387], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:13:26.027 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13173], 95.00th=[13435], 00:13:26.027 | 99.00th=[13698], 99.50th=[13829], 99.90th=[13960], 99.95th=[13960], 00:13:26.027 | 99.99th=[13960] 00:13:26.027 bw ( KiB/s): min=20480, max=20480, per=31.61%, avg=20480.00, stdev= 0.00, samples=2 00:13:26.027 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:13:26.027 lat (usec) : 500=0.01% 00:13:26.027 lat (msec) : 4=0.32%, 10=0.86%, 20=98.81% 00:13:26.027 cpu : usr=4.19%, sys=13.97%, ctx=311, majf=0, minf=1 00:13:26.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:26.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:26.027 issued rwts: total=4769,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:26.027 job3: (groupid=0, jobs=1): err= 0: pid=65277: Fri Sep 27 13:18:27 2024 00:13:26.027 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:13:26.027 slat (usec): min=7, max=8326, avg=100.74, stdev=487.93 00:13:26.027 clat (usec): min=9591, max=20193, avg=13514.67, stdev=1195.37 00:13:26.027 lat (usec): min=12085, max=20219, avg=13615.41, stdev=1101.12 00:13:26.027 clat percentiles (usec): 00:13:26.027 | 1.00th=[10421], 5.00th=[12387], 10.00th=[12649], 20.00th=[12780], 00:13:26.027 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13698], 00:13:26.027 | 70.00th=[13829], 80.00th=[13960], 90.00th=[14091], 95.00th=[14222], 00:13:26.027 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20317], 99.95th=[20317], 00:13:26.027 | 99.99th=[20317] 00:13:26.027 write: IOPS=5078, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1002msec); 0 zone resets 00:13:26.027 slat (usec): min=10, max=4279, avg=98.06, stdev=423.67 00:13:26.027 clat (usec): min=252, max=15139, avg=12615.89, stdev=1172.59 00:13:26.027 lat (usec): min=2483, max=15153, avg=12713.95, stdev=1095.19 00:13:26.027 clat percentiles (usec): 00:13:26.027 | 1.00th=[ 6390], 5.00th=[11863], 10.00th=[12125], 20.00th=[12387], 00:13:26.027 | 30.00th=[12518], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:13:26.027 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13435], 95.00th=[13566], 00:13:26.027 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15139], 99.95th=[15139], 00:13:26.027 | 99.99th=[15139] 00:13:26.027 bw ( KiB/s): min=19208, max=20521, per=30.66%, avg=19864.50, stdev=928.43, samples=2 00:13:26.027 iops : min= 4802, max= 5130, avg=4966.00, stdev=231.93, samples=2 00:13:26.027 lat (usec) : 500=0.01% 
00:13:26.027 lat (msec) : 4=0.33%, 10=0.92%, 20=98.42%, 50=0.32% 00:13:26.027 cpu : usr=4.80%, sys=13.19%, ctx=308, majf=0, minf=1 00:13:26.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:26.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:26.027 issued rwts: total=4608,5089,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:26.027 00:13:26.027 Run status group 0 (all jobs): 00:13:26.027 READ: bw=58.5MiB/s (61.3MB/s), 8167KiB/s-18.6MiB/s (8364kB/s-19.5MB/s), io=58.6MiB (61.5MB), run=1002-1003msec 00:13:26.027 WRITE: bw=63.3MiB/s (66.3MB/s), 9515KiB/s-19.9MiB/s (9744kB/s-20.9MB/s), io=63.5MiB (66.5MB), run=1002-1003msec 00:13:26.027 00:13:26.027 Disk stats (read/write): 00:13:26.027 nvme0n1: ios=3121/3136, merge=0/0, ticks=13797/11454, in_queue=25251, util=88.37% 00:13:26.027 nvme0n2: ios=1834/2048, merge=0/0, ticks=13791/21405, in_queue=35196, util=89.98% 00:13:26.027 nvme0n3: ios=4096/4448, merge=0/0, ticks=12276/11869, in_queue=24145, util=89.21% 00:13:26.027 nvme0n4: ios=4096/4256, merge=0/0, ticks=12382/11653, in_queue=24035, util=89.26% 00:13:26.027 13:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:26.027 [global] 00:13:26.027 thread=1 00:13:26.027 invalidate=1 00:13:26.027 rw=randwrite 00:13:26.027 time_based=1 00:13:26.027 runtime=1 00:13:26.027 ioengine=libaio 00:13:26.027 direct=1 00:13:26.027 bs=4096 00:13:26.027 iodepth=128 00:13:26.027 norandommap=0 00:13:26.027 numjobs=1 00:13:26.027 00:13:26.027 verify_dump=1 00:13:26.027 verify_backlog=512 00:13:26.027 verify_state_save=0 00:13:26.027 do_verify=1 00:13:26.027 verify=crc32c-intel 00:13:26.027 [job0] 00:13:26.027 filename=/dev/nvme0n1 00:13:26.027 [job1] 00:13:26.027 filename=/dev/nvme0n2 00:13:26.027 [job2] 00:13:26.027 filename=/dev/nvme0n3 00:13:26.027 [job3] 00:13:26.027 filename=/dev/nvme0n4 00:13:26.027 Could not set queue depth (nvme0n1) 00:13:26.027 Could not set queue depth (nvme0n2) 00:13:26.027 Could not set queue depth (nvme0n3) 00:13:26.027 Could not set queue depth (nvme0n4) 00:13:26.027 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:26.027 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:26.027 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:26.027 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:26.027 fio-3.35 00:13:26.027 Starting 4 threads 00:13:27.402 00:13:27.402 job0: (groupid=0, jobs=1): err= 0: pid=65330: Fri Sep 27 13:18:28 2024 00:13:27.402 read: IOPS=4977, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1002msec) 00:13:27.402 slat (usec): min=7, max=3405, avg=95.40, stdev=372.75 00:13:27.402 clat (usec): min=573, max=16161, avg=12717.37, stdev=1274.38 00:13:27.402 lat (usec): min=2220, max=16199, avg=12812.77, stdev=1306.28 00:13:27.402 clat percentiles (usec): 00:13:27.402 | 1.00th=[ 6456], 5.00th=[11338], 10.00th=[11994], 20.00th=[12387], 00:13:27.402 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:13:27.402 | 70.00th=[12911], 80.00th=[13042], 90.00th=[14091], 95.00th=[14484], 
00:13:27.402 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15533], 99.95th=[15926], 00:13:27.402 | 99.99th=[16188] 00:13:27.402 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:13:27.402 slat (usec): min=10, max=4454, avg=94.52, stdev=448.95 00:13:27.402 clat (usec): min=9952, max=17207, avg=12337.94, stdev=842.60 00:13:27.402 lat (usec): min=9976, max=17250, avg=12432.46, stdev=941.06 00:13:27.402 clat percentiles (usec): 00:13:27.402 | 1.00th=[10290], 5.00th=[11338], 10.00th=[11600], 20.00th=[11863], 00:13:27.402 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:13:27.402 | 70.00th=[12518], 80.00th=[12649], 90.00th=[13042], 95.00th=[14222], 00:13:27.402 | 99.00th=[15139], 99.50th=[15401], 99.90th=[15926], 99.95th=[16188], 00:13:27.402 | 99.99th=[17171] 00:13:27.402 bw ( KiB/s): min=20480, max=20521, per=26.42%, avg=20500.50, stdev=28.99, samples=2 00:13:27.402 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:13:27.402 lat (usec) : 750=0.01% 00:13:27.402 lat (msec) : 4=0.20%, 10=0.73%, 20=99.06% 00:13:27.402 cpu : usr=4.30%, sys=14.99%, ctx=365, majf=0, minf=12 00:13:27.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:27.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:27.402 issued rwts: total=4987,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:27.402 job1: (groupid=0, jobs=1): err= 0: pid=65331: Fri Sep 27 13:18:28 2024 00:13:27.402 read: IOPS=4971, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1003msec) 00:13:27.402 slat (usec): min=8, max=3863, avg=95.81, stdev=378.62 00:13:27.402 clat (usec): min=639, max=17065, avg=12636.62, stdev=1306.14 00:13:27.402 lat (usec): min=2381, max=17102, avg=12732.43, stdev=1338.79 00:13:27.402 clat percentiles (usec): 00:13:27.402 | 1.00th=[ 6652], 5.00th=[10945], 10.00th=[11994], 20.00th=[12256], 00:13:27.402 | 30.00th=[12518], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:13:27.402 | 70.00th=[12911], 80.00th=[13042], 90.00th=[14091], 95.00th=[14484], 00:13:27.402 | 99.00th=[15401], 99.50th=[16057], 99.90th=[16712], 99.95th=[16712], 00:13:27.402 | 99.99th=[17171] 00:13:27.402 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:13:27.402 slat (usec): min=8, max=3738, avg=94.43, stdev=441.53 00:13:27.402 clat (usec): min=9334, max=16771, avg=12452.19, stdev=950.94 00:13:27.402 lat (usec): min=9365, max=16787, avg=12546.62, stdev=1035.26 00:13:27.402 clat percentiles (usec): 00:13:27.402 | 1.00th=[10028], 5.00th=[11338], 10.00th=[11600], 20.00th=[11863], 00:13:27.402 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12518], 00:13:27.402 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13435], 95.00th=[14484], 00:13:27.402 | 99.00th=[15926], 99.50th=[16057], 99.90th=[16581], 99.95th=[16712], 00:13:27.402 | 99.99th=[16712] 00:13:27.402 bw ( KiB/s): min=20480, max=20480, per=26.39%, avg=20480.00, stdev= 0.00, samples=2 00:13:27.403 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:13:27.403 lat (usec) : 750=0.01% 00:13:27.403 lat (msec) : 4=0.20%, 10=1.31%, 20=98.49% 00:13:27.403 cpu : usr=3.89%, sys=14.67%, ctx=416, majf=0, minf=11 00:13:27.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:27.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.403 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:27.403 issued rwts: total=4986,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.403 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:27.403 job2: (groupid=0, jobs=1): err= 0: pid=65332: Fri Sep 27 13:18:28 2024 00:13:27.403 read: IOPS=4120, BW=16.1MiB/s (16.9MB/s)(16.1MiB/1002msec) 00:13:27.403 slat (usec): min=7, max=5439, avg=112.11, stdev=502.77 00:13:27.403 clat (usec): min=705, max=17872, avg=14783.51, stdev=1226.94 00:13:27.403 lat (usec): min=3272, max=17880, avg=14895.62, stdev=1122.39 00:13:27.403 clat percentiles (usec): 00:13:27.403 | 1.00th=[11338], 5.00th=[14091], 10.00th=[14353], 20.00th=[14615], 00:13:27.403 | 30.00th=[14746], 40.00th=[14877], 50.00th=[14877], 60.00th=[15008], 00:13:27.403 | 70.00th=[15139], 80.00th=[15139], 90.00th=[15401], 95.00th=[15664], 00:13:27.403 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17433], 99.95th=[17957], 00:13:27.403 | 99.99th=[17957] 00:13:27.403 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:13:27.403 slat (usec): min=10, max=4767, avg=108.69, stdev=477.41 00:13:27.403 clat (usec): min=6531, max=17147, avg=14184.34, stdev=933.51 00:13:27.403 lat (usec): min=6552, max=17181, avg=14293.02, stdev=831.50 00:13:27.403 clat percentiles (usec): 00:13:27.403 | 1.00th=[10290], 5.00th=[13435], 10.00th=[13829], 20.00th=[14091], 00:13:27.403 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14222], 60.00th=[14353], 00:13:27.403 | 70.00th=[14353], 80.00th=[14484], 90.00th=[14746], 95.00th=[14877], 00:13:27.403 | 99.00th=[16188], 99.50th=[16319], 99.90th=[16909], 99.95th=[16909], 00:13:27.403 | 99.99th=[17171] 00:13:27.403 bw ( KiB/s): min=17466, max=18672, per=23.29%, avg=18069.00, stdev=852.77, samples=2 00:13:27.403 iops : min= 4366, max= 4668, avg=4517.00, stdev=213.55, samples=2 00:13:27.403 lat (usec) : 750=0.01% 00:13:27.403 lat (msec) : 4=0.34%, 10=0.45%, 20=99.20% 00:13:27.403 cpu : usr=3.60%, sys=13.29%, ctx=342, majf=0, minf=13 00:13:27.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:27.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:27.403 issued rwts: total=4129,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.403 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:27.403 job3: (groupid=0, jobs=1): err= 0: pid=65333: Fri Sep 27 13:18:28 2024 00:13:27.403 read: IOPS=4180, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1002msec) 00:13:27.403 slat (usec): min=4, max=6508, avg=114.28, stdev=574.92 00:13:27.403 clat (usec): min=811, max=21204, avg=14602.62, stdev=1828.94 00:13:27.403 lat (usec): min=2561, max=25153, avg=14716.90, stdev=1859.30 00:13:27.403 clat percentiles (usec): 00:13:27.403 | 1.00th=[ 8160], 5.00th=[11731], 10.00th=[13042], 20.00th=[14091], 00:13:27.403 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14746], 60.00th=[14877], 00:13:27.403 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15926], 95.00th=[17171], 00:13:27.403 | 99.00th=[19792], 99.50th=[20317], 99.90th=[20841], 99.95th=[20841], 00:13:27.403 | 99.99th=[21103] 00:13:27.403 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:13:27.403 slat (usec): min=10, max=5923, avg=105.26, stdev=572.34 00:13:27.403 clat (usec): min=6149, max=21160, avg=14194.23, stdev=1460.99 00:13:27.403 lat (usec): min=6181, max=21422, avg=14299.49, stdev=1547.84 00:13:27.403 clat percentiles (usec): 
00:13:27.403 | 1.00th=[10290], 5.00th=[12256], 10.00th=[13042], 20.00th=[13566], 00:13:27.403 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14091], 60.00th=[14222], 00:13:27.403 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15401], 95.00th=[17171], 00:13:27.403 | 99.00th=[19530], 99.50th=[20317], 99.90th=[20579], 99.95th=[21103], 00:13:27.403 | 99.99th=[21103] 00:13:27.403 bw ( KiB/s): min=18256, max=18364, per=23.60%, avg=18310.00, stdev=76.37, samples=2 00:13:27.403 iops : min= 4564, max= 4591, avg=4577.50, stdev=19.09, samples=2 00:13:27.403 lat (usec) : 1000=0.01% 00:13:27.403 lat (msec) : 4=0.18%, 10=1.18%, 20=97.95%, 50=0.67% 00:13:27.403 cpu : usr=3.90%, sys=13.29%, ctx=348, majf=0, minf=13 00:13:27.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:27.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:27.403 issued rwts: total=4189,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.403 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:27.403 00:13:27.403 Run status group 0 (all jobs): 00:13:27.403 READ: bw=71.2MiB/s (74.7MB/s), 16.1MiB/s-19.4MiB/s (16.9MB/s-20.4MB/s), io=71.4MiB (74.9MB), run=1002-1003msec 00:13:27.403 WRITE: bw=75.8MiB/s (79.5MB/s), 18.0MiB/s-20.0MiB/s (18.8MB/s-20.9MB/s), io=76.0MiB (79.7MB), run=1002-1003msec 00:13:27.403 00:13:27.403 Disk stats (read/write): 00:13:27.403 nvme0n1: ios=4146/4480, merge=0/0, ticks=16393/15521, in_queue=31914, util=86.57% 00:13:27.403 nvme0n2: ios=4109/4522, merge=0/0, ticks=16490/15817, in_queue=32307, util=87.16% 00:13:27.403 nvme0n3: ios=3584/3810, merge=0/0, ticks=12152/11842, in_queue=23994, util=88.82% 00:13:27.403 nvme0n4: ios=3584/3877, merge=0/0, ticks=25255/23692, in_queue=48947, util=89.59% 00:13:27.403 13:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:27.403 13:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=65352 00:13:27.403 13:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:27.403 13:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:27.403 [global] 00:13:27.403 thread=1 00:13:27.403 invalidate=1 00:13:27.403 rw=read 00:13:27.403 time_based=1 00:13:27.403 runtime=10 00:13:27.403 ioengine=libaio 00:13:27.403 direct=1 00:13:27.403 bs=4096 00:13:27.403 iodepth=1 00:13:27.403 norandommap=1 00:13:27.403 numjobs=1 00:13:27.403 00:13:27.403 [job0] 00:13:27.403 filename=/dev/nvme0n1 00:13:27.403 [job1] 00:13:27.403 filename=/dev/nvme0n2 00:13:27.403 [job2] 00:13:27.403 filename=/dev/nvme0n3 00:13:27.403 [job3] 00:13:27.403 filename=/dev/nvme0n4 00:13:27.403 Could not set queue depth (nvme0n1) 00:13:27.403 Could not set queue depth (nvme0n2) 00:13:27.403 Could not set queue depth (nvme0n3) 00:13:27.403 Could not set queue depth (nvme0n4) 00:13:27.403 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:27.403 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:27.403 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:27.403 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:27.403 fio-3.35 00:13:27.403 Starting 4 threads 
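The fio-wrapper call above (-p nvmf -i 4096 -d 1 -t read -r 10) expands to exactly the job file echoed in the log. As a reference only, here is a minimal standalone sketch that reproduces the same 10-second libaio read workload outside the harness; the /dev/nvme0n1 through /dev/nvme0n4 paths and the hotplug-read.fio filename are illustrative assumptions, not part of the test:

# sketch: rebuild the job file printed above and run it with plain fio
cat > hotplug-read.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio hotplug-read.fio   # reads are expected to fail with 'Operation not supported' once the backing bdevs are deleted, as seen below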
00:13:30.710 13:18:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:30.710 fio: pid=65395, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:30.710 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=64073728, buflen=4096 00:13:30.710 13:18:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:30.710 fio: pid=65394, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:30.710 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=37171200, buflen=4096 00:13:30.710 13:18:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:30.710 13:18:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:30.967 fio: pid=65392, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:30.967 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=41242624, buflen=4096 00:13:31.225 13:18:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:31.225 13:18:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:31.225 fio: pid=65393, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:31.225 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=47841280, buflen=4096 00:13:31.483 00:13:31.483 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65392: Fri Sep 27 13:18:33 2024 00:13:31.483 read: IOPS=2825, BW=11.0MiB/s (11.6MB/s)(39.3MiB/3564msec) 00:13:31.483 slat (usec): min=8, max=13758, avg=25.60, stdev=221.40 00:13:31.483 clat (usec): min=133, max=2901, avg=326.09, stdev=89.86 00:13:31.483 lat (usec): min=147, max=14095, avg=351.69, stdev=239.50 00:13:31.483 clat percentiles (usec): 00:13:31.483 | 1.00th=[ 163], 5.00th=[ 208], 10.00th=[ 219], 20.00th=[ 262], 00:13:31.483 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 338], 00:13:31.483 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 371], 95.00th=[ 429], 00:13:31.483 | 99.00th=[ 578], 99.50th=[ 594], 99.90th=[ 1270], 99.95th=[ 1614], 00:13:31.483 | 99.99th=[ 2180] 00:13:31.483 bw ( KiB/s): min=10088, max=12240, per=22.14%, avg=10729.33, stdev=797.39, samples=6 00:13:31.484 iops : min= 2522, max= 3060, avg=2682.67, stdev=199.19, samples=6 00:13:31.484 lat (usec) : 250=18.65%, 500=77.20%, 750=3.89%, 1000=0.12% 00:13:31.484 lat (msec) : 2=0.11%, 4=0.02% 00:13:31.484 cpu : usr=1.21%, sys=5.25%, ctx=10082, majf=0, minf=1 00:13:31.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:31.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.484 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.484 issued rwts: total=10070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:31.484 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65393: Fri Sep 27 13:18:33 2024 
00:13:31.484 read: IOPS=3045, BW=11.9MiB/s (12.5MB/s)(45.6MiB/3835msec) 00:13:31.484 slat (usec): min=8, max=11809, avg=18.66, stdev=203.55 00:13:31.484 clat (usec): min=125, max=7845, avg=308.38, stdev=140.98 00:13:31.484 lat (usec): min=139, max=12039, avg=327.04, stdev=258.22 00:13:31.484 clat percentiles (usec): 00:13:31.484 | 1.00th=[ 137], 5.00th=[ 151], 10.00th=[ 186], 20.00th=[ 223], 00:13:31.484 | 30.00th=[ 277], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 343], 00:13:31.484 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 379], 00:13:31.484 | 99.00th=[ 486], 99.50th=[ 537], 99.90th=[ 1696], 99.95th=[ 3228], 00:13:31.484 | 99.99th=[ 5800] 00:13:31.484 bw ( KiB/s): min=10912, max=13709, per=23.75%, avg=11511.57, stdev=998.21, samples=7 00:13:31.484 iops : min= 2728, max= 3427, avg=2877.86, stdev=249.46, samples=7 00:13:31.484 lat (usec) : 250=27.42%, 500=71.71%, 750=0.63%, 1000=0.09% 00:13:31.484 lat (msec) : 2=0.06%, 4=0.05%, 10=0.03% 00:13:31.484 cpu : usr=0.81%, sys=4.20%, ctx=11693, majf=0, minf=1 00:13:31.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:31.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.484 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.484 issued rwts: total=11681,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:31.484 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65394: Fri Sep 27 13:18:33 2024 00:13:31.484 read: IOPS=2802, BW=10.9MiB/s (11.5MB/s)(35.4MiB/3239msec) 00:13:31.484 slat (usec): min=8, max=9425, avg=20.01, stdev=127.20 00:13:31.484 clat (usec): min=161, max=3500, avg=335.00, stdev=67.50 00:13:31.484 lat (usec): min=179, max=9778, avg=355.01, stdev=145.05 00:13:31.484 clat percentiles (usec): 00:13:31.484 | 1.00th=[ 227], 5.00th=[ 245], 10.00th=[ 269], 20.00th=[ 314], 00:13:31.484 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 347], 00:13:31.484 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 379], 00:13:31.484 | 99.00th=[ 486], 99.50th=[ 529], 99.90th=[ 963], 99.95th=[ 1745], 00:13:31.484 | 99.99th=[ 3490] 00:13:31.484 bw ( KiB/s): min=10912, max=12224, per=23.20%, avg=11244.00, stdev=494.18, samples=6 00:13:31.484 iops : min= 2728, max= 3056, avg=2811.00, stdev=123.55, samples=6 00:13:31.484 lat (usec) : 250=6.53%, 500=92.61%, 750=0.69%, 1000=0.06% 00:13:31.484 lat (msec) : 2=0.08%, 4=0.02% 00:13:31.484 cpu : usr=1.20%, sys=4.57%, ctx=9079, majf=0, minf=2 00:13:31.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:31.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.484 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.484 issued rwts: total=9076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:31.484 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65395: Fri Sep 27 13:18:33 2024 00:13:31.484 read: IOPS=5246, BW=20.5MiB/s (21.5MB/s)(61.1MiB/2982msec) 00:13:31.484 slat (nsec): min=12681, max=54490, avg=16032.98, stdev=3896.21 00:13:31.484 clat (usec): min=142, max=2146, avg=173.08, stdev=25.66 00:13:31.484 lat (usec): min=156, max=2160, avg=189.11, stdev=26.71 00:13:31.484 clat percentiles (usec): 00:13:31.484 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 
159], 00:13:31.484 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:13:31.484 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 202], 00:13:31.484 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 289], 99.95th=[ 338], 00:13:31.484 | 99.99th=[ 725] 00:13:31.484 bw ( KiB/s): min=18792, max=21848, per=43.46%, avg=21062.40, stdev=1280.48, samples=5 00:13:31.484 iops : min= 4698, max= 5462, avg=5265.60, stdev=320.12, samples=5 00:13:31.484 lat (usec) : 250=98.87%, 500=1.09%, 750=0.02% 00:13:31.484 lat (msec) : 4=0.01% 00:13:31.484 cpu : usr=1.78%, sys=6.98%, ctx=15645, majf=0, minf=2 00:13:31.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:31.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.484 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.484 issued rwts: total=15644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:31.484 00:13:31.484 Run status group 0 (all jobs): 00:13:31.484 READ: bw=47.3MiB/s (49.6MB/s), 10.9MiB/s-20.5MiB/s (11.5MB/s-21.5MB/s), io=182MiB (190MB), run=2982-3835msec 00:13:31.484 00:13:31.484 Disk stats (read/write): 00:13:31.484 nvme0n1: ios=9246/0, merge=0/0, ticks=3138/0, in_queue=3138, util=95.25% 00:13:31.484 nvme0n2: ios=10460/0, merge=0/0, ticks=3186/0, in_queue=3186, util=95.32% 00:13:31.484 nvme0n3: ios=8715/0, merge=0/0, ticks=2892/0, in_queue=2892, util=96.37% 00:13:31.484 nvme0n4: ios=15089/0, merge=0/0, ticks=2667/0, in_queue=2667, util=96.83% 00:13:31.484 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:31.484 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:31.742 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:31.742 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:31.999 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:31.999 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:32.256 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:32.256 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:32.513 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:32.513 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:32.769 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:32.769 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 65352 00:13:32.769 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:32.769 13:18:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.769 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:32.769 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:32.769 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.769 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:32.769 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:32.769 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.769 nvmf hotplug test: fio failed as expected 00:13:32.769 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:32.769 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:32.769 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:32.769 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:33.027 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:33.027 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:33.027 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:33.027 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:33.027 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:33.027 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@331 -- # nvmfcleanup 00:13:33.027 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:13:33.027 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:33.027 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:13:33.027 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:33.027 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:33.027 rmmod nvme_tcp 00:13:33.283 rmmod nvme_fabrics 00:13:33.283 rmmod nvme_keyring 00:13:33.283 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:33.283 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:13:33.283 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:13:33.283 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # '[' -n 64965 ']' 00:13:33.283 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@333 -- # killprocess 64965 00:13:33.283 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 64965 ']' 00:13:33.283 13:18:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 64965 00:13:33.283 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:13:33.283 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:33.283 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64965 00:13:33.283 killing process with pid 64965 00:13:33.283 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:33.283 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:33.283 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64965' 00:13:33.283 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 64965 00:13:33.283 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 64965 00:13:33.283 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:13:33.283 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # nvmf_fini 00:13:33.283 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@264 -- # local dev 00:13:33.283 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:13:33.283 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:33.283 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:33.283 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:33.283 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:13:33.283 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:33.283 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:13:33.283 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:13:33.283 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:13:33.283 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:13:33.283 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:13:33.540 
13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@271 -- # continue 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@271 -- # continue 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@284 -- # iptr 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@538 -- # iptables-save 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@538 -- # iptables-restore 00:13:33.540 00:13:33.540 real 0m20.200s 00:13:33.540 user 1m17.121s 00:13:33.540 sys 0m10.070s 00:13:33.540 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:33.541 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.541 ************************************ 00:13:33.541 END TEST nvmf_fio_target 00:13:33.541 ************************************ 00:13:33.541 13:18:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:33.541 13:18:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:33.541 13:18:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:33.541 13:18:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:33.541 ************************************ 00:13:33.541 START TEST nvmf_bdevio 00:13:33.541 ************************************ 00:13:33.541 13:18:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:33.541 * Looking for test storage... 00:13:33.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:33.541 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:33.800 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:33.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.801 --rc genhtml_branch_coverage=1 00:13:33.801 --rc genhtml_function_coverage=1 00:13:33.801 --rc genhtml_legend=1 00:13:33.801 --rc geninfo_all_blocks=1 00:13:33.801 --rc geninfo_unexecuted_blocks=1 00:13:33.801 00:13:33.801 ' 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:33.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.801 --rc genhtml_branch_coverage=1 00:13:33.801 --rc genhtml_function_coverage=1 00:13:33.801 --rc genhtml_legend=1 00:13:33.801 --rc geninfo_all_blocks=1 00:13:33.801 --rc geninfo_unexecuted_blocks=1 00:13:33.801 00:13:33.801 ' 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:33.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.801 --rc genhtml_branch_coverage=1 00:13:33.801 --rc genhtml_function_coverage=1 00:13:33.801 --rc genhtml_legend=1 00:13:33.801 --rc geninfo_all_blocks=1 00:13:33.801 --rc geninfo_unexecuted_blocks=1 00:13:33.801 00:13:33.801 ' 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:33.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.801 --rc genhtml_branch_coverage=1 00:13:33.801 --rc genhtml_function_coverage=1 00:13:33.801 --rc genhtml_legend=1 00:13:33.801 --rc geninfo_all_blocks=1 00:13:33.801 --rc geninfo_unexecuted_blocks=1 00:13:33.801 00:13:33.801 ' 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.801 13:18:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:33.801 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:13:33.801 13:18:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # prepare_net_devs 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # local -g is_hw=no 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # remove_target_ns 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@276 -- # nvmf_veth_init 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@233 -- # create_target_ns 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@234 -- # create_main_bridge 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@114 -- # delete_main_bridge 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@130 -- # return 0 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:13:33.801 13:18:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@160 -- # set_up initiator0 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:33.801 13:18:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@160 -- # set_up target0 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set target0 up 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@161 -- # set_up target0_br 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_ns target0 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:13:33.801 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip=10.0.0.1 
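The ip_pool value 167772161 used above is 0x0A000001, which the printf renders as 10.0.0.1 (167772162 becomes 10.0.0.2, and so on). The trace only shows the octets already split, so the shift-and-mask below is an assumed way to do that split in plain bash, not necessarily what the helper does internally:

# sketch: convert a 32-bit pool value into dotted-quad form (split logic is an assumption)
val_to_ip_sketch() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
    $(( (val >>  8) & 255 )) $((  val        & 255 ))
}
val_to_ip_sketch 167772161   # -> 10.0.0.1
val_to_ip_sketch 167772164   # -> 10.0.0.4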
00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:13:33.802 10.0.0.1 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:13:33.802 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:13:33.802 10.0.0.2 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@75 -- # set_up initiator0 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:13:34.060 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@138 -- # set_up target0_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 
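Up to this point the trace has wired the first initiator/target pair; the lines that follow repeat the same steps for initiator1/target1 with 10.0.0.3 and 10.0.0.4. Condensed into plain commands, the wiring for one pair looks roughly like this (a summary sketch assembled from the trace, not the setup.sh source; the device, bridge and namespace names are the ones shown above):

# sketch: per-pair veth/bridge/netns wiring, condensed from the trace
ip netns add nvmf_ns_spdk                                    # target-side network namespace
ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge joining the *_br peers
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT          # let traffic cross the bridge

ip link add initiator0 type veth peer name initiator0_br     # initiator-side veth pair
ip link add target0    type veth peer name target0_br        # target-side veth pair
ip link set target0 netns nvmf_ns_spdk                       # move the target end into the namespace

ip addr add 10.0.0.1/24 dev initiator0 && ip link set initiator0 up
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
ip netns exec nvmf_ns_spdk ip link set target0 up

ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
ip link set target0_br    master nvmf_br && ip link set target0_br up

iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP on the initiator side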
00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@160 -- # set_up initiator1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@160 -- # set_up target1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set target1 up 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@161 -- # set_up target1_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_ns target1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772163 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:13:34.061 10.0.0.3 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772164 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:13:34.061 
13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:13:34.061 10.0.0.4 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@75 -- # set_up initiator1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@138 -- # set_up target1_br 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@217 -- # ip link set target1_br up 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 2 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo initiator0 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=initiator0 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:34.061 13:18:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:34.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:34.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:13:34.061 00:13:34.061 --- 10.0.0.1 ping statistics --- 00:13:34.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.061 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target0 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo target0 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=target0 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:13:34.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:34.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 00:13:34.061 00:13:34.061 --- 10.0.0.2 ping statistics --- 00:13:34.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.061 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo initiator1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=initiator1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:13:34.061 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:34.061 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:13:34.061 00:13:34.061 --- 10.0.0.3 ping statistics --- 00:13:34.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.061 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:34.061 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:34.062 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:34.062 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:34.062 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:34.062 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:34.062 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:34.062 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target1 00:13:34.062 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:34.062 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:34.062 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo target1 00:13:34.062 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=target1 00:13:34.062 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:13:34.062 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:13:34.319 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:13:34.319 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.093 ms 00:13:34.319 00:13:34.319 --- 10.0.0.4 ping statistics --- 00:13:34.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.319 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # return 0 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo initiator0 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=initiator0 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # 
[[ -n '' ]] 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator1 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo initiator1 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=initiator1 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:34.319 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target0 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo target0 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=target0 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:13:34.320 13:18:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target1 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo target1 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=target1 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:34.320 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:34.320 13:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@324 -- # nvmfpid=65711 00:13:34.320 13:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # waitforlisten 65711 00:13:34.320 13:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:34.320 13:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 65711 ']' 
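The trace above is setup.sh building the second initiator/target pair of the veth fixture and then launching the target: each pair gets one end left in the root namespace and one end moved into nvmf_ns_spdk, the chosen IP is written both to the interface and to its ifalias file (which is what the get_ip_address helpers later read back), an iptables ACCEPT rule is inserted for TCP port 4420, and nvmf_tgt is finally started inside the namespace. A condensed, hand-run sketch of the same steps -- not the literal setup.sh code, and it assumes the nvmf_ns_spdk namespace and the nvmf_br bridge already exist:

  ip link add initiator1 type veth peer name initiator1_br
  ip link add target1 type veth peer name target1_br
  ip link set initiator1 up; ip link set initiator1_br up
  ip link set target1 up;    ip link set target1_br up
  ip link set target1 netns nvmf_ns_spdk                     # target end lives in the namespace
  ip addr add 10.0.0.3/24 dev initiator1
  echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias      # record the IP where the helpers look it up
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
  echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias
  ip netns exec nvmf_ns_spdk ip link set target1 up          # the netns move reset the link state
  ip link set initiator1_br master nvmf_br                   # bridge the *_br ends so both sides can reach each other
  ip link set target1_br master nvmf_br
  iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'
  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3              # cross-namespace reachability check, as in ping_ips
  # -m 0x78 is binary 1111000, i.e. cores 3-6 (matching the reactor messages below); -e 0xFFFF enables all tracepoint groups
  ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done      # rough stand-in for waitforlisten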
00:13:34.320 13:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.320 13:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:34.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.320 13:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.320 13:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:34.320 13:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:34.320 [2024-09-27 13:18:36.051979] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:13:34.320 [2024-09-27 13:18:36.052062] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.578 [2024-09-27 13:18:36.188274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:34.578 [2024-09-27 13:18:36.251325] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.578 [2024-09-27 13:18:36.251377] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.578 [2024-09-27 13:18:36.251389] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.578 [2024-09-27 13:18:36.251398] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.578 [2024-09-27 13:18:36.251405] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
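Because the target was started with -e 0xFFFF, every tracepoint group is enabled, and the startup notices above spell out how to look at them: run spdk_trace against shared-memory instance 0, or copy the raw region out of /dev/shm for offline decoding. A minimal sketch -- the build/bin path for the spdk_trace tool is an assumption, the flags are the ones the notice itself suggests:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0   # snapshot of live events for app "nvmf", instance 0
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0                       # or keep the region for offline analysis/debug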
00:13:34.578 [2024-09-27 13:18:36.251566] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:13:34.578 [2024-09-27 13:18:36.254714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:13:34.578 [2024-09-27 13:18:36.254889] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:13:34.578 [2024-09-27 13:18:36.254900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:34.578 [2024-09-27 13:18:36.284285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:35.511 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:35.511 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:13:35.511 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:13:35.511 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:35.511 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:35.512 [2024-09-27 13:18:37.127483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:35.512 Malloc0 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:35.512 [2024-09-27 13:18:37.169920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # config=() 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # local subsystem config 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # for subsystem in "${@:-1}" 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # config+=("$(cat <<-EOF 00:13:35.512 { 00:13:35.512 "params": { 00:13:35.512 "name": "Nvme$subsystem", 00:13:35.512 "trtype": "$TEST_TRANSPORT", 00:13:35.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:35.512 "adrfam": "ipv4", 00:13:35.512 "trsvcid": "$NVMF_PORT", 00:13:35.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:35.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:35.512 "hdgst": ${hdgst:-false}, 00:13:35.512 "ddgst": ${ddgst:-false} 00:13:35.512 }, 00:13:35.512 "method": "bdev_nvme_attach_controller" 00:13:35.512 } 00:13:35.512 EOF 00:13:35.512 )") 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # cat 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # jq . 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@393 -- # IFS=, 00:13:35.512 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # printf '%s\n' '{ 00:13:35.512 "params": { 00:13:35.512 "name": "Nvme1", 00:13:35.512 "trtype": "tcp", 00:13:35.512 "traddr": "10.0.0.2", 00:13:35.512 "adrfam": "ipv4", 00:13:35.512 "trsvcid": "4420", 00:13:35.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:35.512 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:35.512 "hdgst": false, 00:13:35.512 "ddgst": false 00:13:35.512 }, 00:13:35.512 "method": "bdev_nvme_attach_controller" 00:13:35.512 }' 00:13:35.512 [2024-09-27 13:18:37.225116] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
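bdevio.sh provisions the target entirely over JSON-RPC (rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock) and then hands the bdevio binary a generated JSON config telling it how to attach to the listener it just created; the /dev/fd/62 argument is simply that config arriving via process substitution. A rough by-hand equivalent -- the rpc.py path and the subsystems/bdev wrapper around the printed params block are assumptions based on the generic SPDK app JSON-config layout, while the argument lists and the params values are the ones traced above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # feed bdevio the attach description via process substitution, which is where /dev/fd/62 comes from
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
      --json <(printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1", "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false } } ] } ] }')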
00:13:35.512 [2024-09-27 13:18:37.225210] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65753 ] 00:13:35.770 [2024-09-27 13:18:37.363670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:35.770 [2024-09-27 13:18:37.436399] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.770 [2024-09-27 13:18:37.436487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.770 [2024-09-27 13:18:37.436497] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.770 [2024-09-27 13:18:37.478093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:35.770 I/O targets: 00:13:35.770 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:35.770 00:13:35.770 00:13:35.770 CUnit - A unit testing framework for C - Version 2.1-3 00:13:35.770 http://cunit.sourceforge.net/ 00:13:35.770 00:13:35.770 00:13:35.770 Suite: bdevio tests on: Nvme1n1 00:13:35.770 Test: blockdev write read block ...passed 00:13:35.770 Test: blockdev write zeroes read block ...passed 00:13:35.770 Test: blockdev write zeroes read no split ...passed 00:13:35.770 Test: blockdev write zeroes read split ...passed 00:13:35.770 Test: blockdev write zeroes read split partial ...passed 00:13:35.770 Test: blockdev reset ...[2024-09-27 13:18:37.612185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:35.770 [2024-09-27 13:18:37.612501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7861e0 (9): Bad file descriptor 00:13:36.029 [2024-09-27 13:18:37.630275] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
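The I/O target geometry bdevio reports is the malloc bdev created earlier, seen through the NVMe-oF path: bdev_malloc_create 64 512 asks for a 64 MiB bdev with 512-byte blocks, which is exactly the 131072 blocks printed for Nvme1n1. The ERROR about flushing the TCP qpair appears while the "blockdev reset" test is disconnecting the controller, and the "Resetting controller successful" notice that follows is the reconnect completing. Quick size check:

  echo $(( 64 * 1024 * 1024 / 512 ))   # 131072 -> "Nvme1n1: 131072 blocks of 512 bytes (64 MiB)"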
00:13:36.029 passed 00:13:36.029 Test: blockdev write read 8 blocks ...passed 00:13:36.029 Test: blockdev write read size > 128k ...passed 00:13:36.029 Test: blockdev write read invalid size ...passed 00:13:36.029 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:36.029 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:36.029 Test: blockdev write read max offset ...passed 00:13:36.029 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:36.029 Test: blockdev writev readv 8 blocks ...passed 00:13:36.029 Test: blockdev writev readv 30 x 1block ...passed 00:13:36.029 Test: blockdev writev readv block ...passed 00:13:36.029 Test: blockdev writev readv size > 128k ...passed 00:13:36.029 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:36.029 Test: blockdev comparev and writev ...[2024-09-27 13:18:37.637763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.029 [2024-09-27 13:18:37.637809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:36.029 [2024-09-27 13:18:37.637831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.029 [2024-09-27 13:18:37.637843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:36.029 [2024-09-27 13:18:37.638302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.029 [2024-09-27 13:18:37.638333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:36.029 [2024-09-27 13:18:37.638352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.029 [2024-09-27 13:18:37.638363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:36.029 [2024-09-27 13:18:37.638829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.029 [2024-09-27 13:18:37.638860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:36.029 [2024-09-27 13:18:37.638879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.029 [2024-09-27 13:18:37.638891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:36.029 [2024-09-27 13:18:37.639335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.029 [2024-09-27 13:18:37.639364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:36.029 [2024-09-27 13:18:37.639383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:36.029 [2024-09-27 13:18:37.639394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:36.029 passed 00:13:36.029 Test: blockdev nvme passthru rw ...passed 00:13:36.029 Test: blockdev nvme passthru vendor specific ...[2024-09-27 13:18:37.640283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:36.029 [2024-09-27 13:18:37.640315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:36.029 [2024-09-27 13:18:37.640428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:36.029 [2024-09-27 13:18:37.640451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:36.029 [2024-09-27 13:18:37.640551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:36.029 [2024-09-27 13:18:37.640574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:36.029 passed 00:13:36.029 Test: blockdev nvme admin passthru ...[2024-09-27 13:18:37.640673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:36.029 [2024-09-27 13:18:37.640716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:36.029 passed 00:13:36.029 Test: blockdev copy ...passed 00:13:36.029 00:13:36.029 Run Summary: Type Total Ran Passed Failed Inactive 00:13:36.029 suites 1 1 n/a 0 0 00:13:36.029 tests 23 23 23 0 0 00:13:36.029 asserts 152 152 152 0 n/a 00:13:36.029 00:13:36.029 Elapsed time = 0.142 seconds 00:13:36.029 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.029 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.029 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:36.029 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.029 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:36.029 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:36.029 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@331 -- # nvmfcleanup 00:13:36.029 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:13:36.029 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:36.029 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:13:36.029 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:36.029 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:36.029 rmmod nvme_tcp 00:13:36.287 rmmod nvme_fabrics 00:13:36.287 rmmod nvme_keyring 00:13:36.287 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:36.287 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:13:36.287 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:13:36.287 
13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # '[' -n 65711 ']' 00:13:36.287 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@333 -- # killprocess 65711 00:13:36.287 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 65711 ']' 00:13:36.287 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 65711 00:13:36.287 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:13:36.287 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:36.287 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65711 00:13:36.287 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:13:36.287 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:13:36.287 killing process with pid 65711 00:13:36.287 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65711' 00:13:36.287 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 65711 00:13:36.287 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 65711 00:13:36.287 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:13:36.287 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # nvmf_fini 00:13:36.287 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@264 -- # local dev 00:13:36.287 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@267 -- # remove_target_ns 00:13:36.287 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:36.287 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:36.287 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@268 -- # delete_main_bridge 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:13:36.545 13:18:38 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@271 -- # continue 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@271 -- # continue 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@284 -- # iptr 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@538 -- # iptables-save 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@538 -- # iptables-restore 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:13:36.545 00:13:36.545 real 0m2.975s 00:13:36.545 user 0m8.836s 00:13:36.545 sys 0m0.777s 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:36.545 ************************************ 00:13:36.545 END TEST nvmf_bdevio 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:36.545 ************************************ 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # [[ tcp == \t\c\p ]] 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # [[ virt != phy ]] 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:36.545 13:18:38 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:36.545 13:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:36.545 ************************************ 00:13:36.545 START TEST nvmf_target_multipath 00:13:36.545 ************************************ 00:13:36.546 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:36.805 * Looking for test storage... 00:13:36.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:36.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.805 --rc genhtml_branch_coverage=1 00:13:36.805 --rc genhtml_function_coverage=1 00:13:36.805 --rc genhtml_legend=1 00:13:36.805 --rc geninfo_all_blocks=1 00:13:36.805 --rc geninfo_unexecuted_blocks=1 00:13:36.805 00:13:36.805 ' 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:36.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.805 --rc genhtml_branch_coverage=1 00:13:36.805 --rc genhtml_function_coverage=1 00:13:36.805 --rc genhtml_legend=1 00:13:36.805 --rc geninfo_all_blocks=1 00:13:36.805 --rc geninfo_unexecuted_blocks=1 00:13:36.805 00:13:36.805 ' 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:36.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.805 --rc genhtml_branch_coverage=1 00:13:36.805 --rc genhtml_function_coverage=1 00:13:36.805 --rc genhtml_legend=1 00:13:36.805 --rc geninfo_all_blocks=1 00:13:36.805 --rc geninfo_unexecuted_blocks=1 00:13:36.805 00:13:36.805 ' 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:36.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.805 --rc genhtml_branch_coverage=1 00:13:36.805 --rc genhtml_function_coverage=1 00:13:36.805 --rc genhtml_legend=1 00:13:36.805 --rc geninfo_all_blocks=1 00:13:36.805 --rc geninfo_unexecuted_blocks=1 00:13:36.805 00:13:36.805 ' 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:36.805 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@50 -- # : 0 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:36.806 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:36.806 
13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # prepare_net_devs 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # local -g is_hw=no 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # remove_target_ns 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@276 -- # nvmf_veth_init 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@233 -- # create_target_ns 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 
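Condensed, the namespace and bridge setup that nvmftestinit traces here amounts to the following; this is a sketch of the equivalent iproute2/iptables commands pulled from the surrounding entries, not the setup.sh source itself:

  # Create the target network namespace and bring its loopback up.
  ip netns add nvmf_ns_spdk
  ip netns exec nvmf_ns_spdk ip link set lo up
  # Create the main bridge the veth peers attach to and allow forwarding
  # across it; the SPDK_NVMF comment lets teardown strip the rule later.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'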
00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@234 -- # create_main_bridge 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@114 -- # delete_main_bridge 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@130 -- # return 0 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@28 -- # local -g _dev 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:13:36.806 13:18:38 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:36.806 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@160 -- # set_up initiator0 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@160 -- # set_up target0 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # eval ' ip link set 
target0 up' 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # ip link set target0 up 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@161 -- # set_up target0_br 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # add_to_ns target0 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772161 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:13:36.807 10.0.0.1 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772162 00:13:36.807 13:18:38 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:13:36.807 10.0.0.2 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@75 -- # set_up initiator0 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:13:36.807 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 
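The veth plumbing for pair 0 traced above and completed just below (and then repeated for pair 1 with 10.0.0.3/10.0.0.4) reduces to this sequence; a condensed sketch of the commands shown in the log, not the script itself:

  # One initiator/target pair: two veth pairs, the target end moved into the
  # namespace, /24 addresses plus ifalias for later lookups, bridge peers
  # enslaved to nvmf_br, and an ACCEPT rule for the NVMe/TCP port 4420.
  ip link add initiator0 type veth peer name initiator0_br
  ip link add target0 type veth peer name target0_br
  ip link set initiator0 up
  ip link set target0 up
  ip link set target0 netns nvmf_ns_spdk
  ip addr add 10.0.0.1/24 dev initiator0
  echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
  echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias
  ip netns exec nvmf_ns_spdk ip link set target0 up
  ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
  ip link set target0_br master nvmf_br && ip link set target0_br up
  iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'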
00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@138 -- # set_up target0_br 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@160 -- # set_up initiator1 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@160 -- # set_up target1 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:37.067 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # ip link set target1 up 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@161 -- # set_up target1_br 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # add_to_ns target1 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772163 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:13:37.068 10.0.0.3 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772164 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:13:37.068 10.0.0.4 00:13:37.068 13:18:38 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@75 -- # set_up initiator1 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@138 -- # set_up target1_br 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:13:37.068 13:18:38 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@38 -- # ping_ips 2 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@110 -- # echo initiator0 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@168 -- # dev=initiator0 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@99 -- 
# ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:37.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:37.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:13:37.068 00:13:37.068 --- 10.0.0.1 ping statistics --- 00:13:37.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.068 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:37.068 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@107 -- # local dev=target0 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@110 -- # echo target0 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@168 -- # dev=target0 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:37.069 
13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:13:37.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:13:37.069 00:13:37.069 --- 10.0.0.2 ping statistics --- 00:13:37.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.069 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@107 -- # local dev=initiator1 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@110 -- # echo initiator1 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@168 -- # dev=initiator1 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:13:37.069 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:37.069 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:13:37.069 00:13:37.069 --- 10.0.0.3 ping statistics --- 00:13:37.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.069 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@107 -- # local dev=target1 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@110 -- # echo target1 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@168 -- # dev=target1 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:13:37.069 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:13:37.069 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.104 ms 00:13:37.069 00:13:37.069 --- 10.0.0.4 ping statistics --- 00:13:37.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.069 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # return 0 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@110 -- # echo initiator0 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@168 -- # dev=initiator0 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:13:37.069 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:13:37.069 13:18:38 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@107 -- # local dev=initiator1 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@110 -- # echo initiator1 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@168 -- # dev=initiator1 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@107 -- # local dev=target0 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@110 -- # echo target0 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@168 -- # dev=target0 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 
00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@107 -- # local dev=target1 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@110 -- # echo target1 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@168 -- # dev=target1 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # nvmfappstart -m 0xF 
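Everything up to this point is the legacy environment setup: each address is read from the interface's ifalias file (inside the nvmf_ns_spdk namespace for the target-side devices) and then pinged once per initiator/target pair. A minimal sketch of that resolution and reachability check, assuming the device naming seen in the trace (initiator0/initiator1 on the host, target0/target1 in the namespace); this is an illustration, not the harness's own setup.sh:

    # Resolve an address from /sys/class/net/<dev>/ifalias, optionally inside
    # a network namespace (the target devices live in nvmf_ns_spdk).
    get_ip() {
        local dev=$1 netns=$2
        if [[ -n $netns ]]; then
            ip netns exec "$netns" cat "/sys/class/net/$dev/ifalias"
        else
            cat "/sys/class/net/$dev/ifalias"
        fi
    }

    # Values resolved in this run:
    #   initiator0 -> 10.0.0.1   target0 (in nvmf_ns_spdk) -> 10.0.0.2
    #   initiator1 -> 10.0.0.3   target1 (in nvmf_ns_spdk) -> 10.0.0.4

    # Pairwise reachability check, mirroring the ping_ip loop in the trace;
    # both addresses of each pair are pinged from the host.
    pairs=2
    for (( pair = 0; pair < pairs; pair++ )); do
        ping -c 1 "$(get_ip "initiator$pair")"
        ping -c 1 "$(get_ip "target$pair" nvmf_ns_spdk)"
    done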
00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@324 -- # nvmfpid=65977 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # waitforlisten 65977 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 65977 ']' 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:37.328 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:37.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.329 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.329 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:37.329 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:37.329 [2024-09-27 13:18:39.041900] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:13:37.329 [2024-09-27 13:18:39.042010] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.591 [2024-09-27 13:18:39.181952] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:37.591 [2024-09-27 13:18:39.239765] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.591 [2024-09-27 13:18:39.239815] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.591 [2024-09-27 13:18:39.239827] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.591 [2024-09-27 13:18:39.239835] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.591 [2024-09-27 13:18:39.239843] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:37.591 [2024-09-27 13:18:39.240003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.591 [2024-09-27 13:18:39.240143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.591 [2024-09-27 13:18:39.240715] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.591 [2024-09-27 13:18:39.240745] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.591 [2024-09-27 13:18:39.269675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:38.524 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:38.524 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:13:38.524 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:13:38.524 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:38.524 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:38.524 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.524 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:38.524 [2024-09-27 13:18:40.310642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.524 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:13:38.780 Malloc0 00:13:38.780 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:13:39.037 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:39.295 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.552 [2024-09-27 13:18:41.325656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.552 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:13:39.810 [2024-09-27 13:18:41.577878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:13:39.810 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@55 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid=1dd592da-03b1-46ba-b90a-3aebb25e3723 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:13:40.068 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@56 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid=1dd592da-03b1-46ba-b90a-3aebb25e3723 -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.4 -s 4420 -g -G 00:13:40.068 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # waitforserial SPDKISFASTANDAWESOME 00:13:40.068 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:13:40.068 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:40.068 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:40.068 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@60 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@60 -- # subsystem=nvme-subsys0 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # paths=("${paths[@]##*/}") 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # (( 2 == 2 )) 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@66 -- # p0=nvme0c0n1 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # p1=nvme0c1n1 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # check_ana_state nvme0c0n1 optimized 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
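With both nvme connect calls issued, the test locates the subsystem that exposes serial SPDKISFASTANDAWESOME under sysfs and derives the two per-controller namespace nodes (nvme0c0n1 and nvme0c1n1) whose ANA states it will watch. A condensed sketch of that discovery, reconstructed from the get_subsystem/paths trace above; the sysfs attribute names are the standard Linux NVMe multipath layout, and the helper shape is illustrative:

    # Find the nvme-subsystem directory whose NQN and serial match the test
    # subsystem, then list its per-controller namespaces (e.g. nvme0c0n1).
    find_paths() {
        local nqn=$1 serial=$2 s
        for s in /sys/class/nvme-subsystem/*; do
            [[ -e $s/subsysnqn ]] || continue
            [[ $(<"$s/subsysnqn") == "$nqn" ]] || continue
            [[ $(<"$s/serial") == *"$serial"* ]] || continue  # tolerate padding
            basename -a "$s"/nvme*/nvme*c*
            return 0
        done
        return 1
    }

    find_paths nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME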
00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@70 -- # check_ana_state nvme0c1n1 optimized 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # echo numa 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # fio_pid=66068 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # sleep 1 00:13:42.599 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:42.599 [global] 00:13:42.599 thread=1 00:13:42.599 invalidate=1 00:13:42.599 rw=randrw 00:13:42.599 time_based=1 00:13:42.599 runtime=6 00:13:42.599 ioengine=libaio 00:13:42.599 direct=1 00:13:42.599 bs=4096 00:13:42.599 iodepth=128 00:13:42.599 norandommap=0 00:13:42.599 numjobs=1 00:13:42.599 00:13:42.599 verify_dump=1 00:13:42.599 verify_backlog=512 00:13:42.599 verify_state_save=0 00:13:42.599 do_verify=1 00:13:42.599 verify=crc32c-intel 00:13:42.599 [job0] 00:13:42.599 filename=/dev/nvme0n1 00:13:42.599 Could not set queue depth (nvme0n1) 00:13:42.599 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:42.599 fio-3.35 00:13:42.599 Starting 1 thread 00:13:43.166 13:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:13:43.425 13:18:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:13:43.684 13:18:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@83 -- # check_ana_state nvme0c0n1 inaccessible 00:13:43.684 13:18:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:43.684 13:18:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
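The check_ana_state calls traced immediately above and below poll each path's ana_state file until it reports the expected ANA state. Reconstructed from the traced locals (path, ana_state, timeout, ana_state_f) for readability; the retry/sleep tail is an assumption, since every check in this run passes on the first read:

    # Poll /sys/block/<path>/ana_state until it matches the expected state,
    # giving up after ~20 attempts. The sleep/retry handling is assumed.
    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1
            sleep 1
        done
    }

    # States checked after the listener ANA changes in this run:
    check_ana_state nvme0c0n1 inaccessible
    check_ana_state nvme0c1n1 non-optimized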
00:13:43.684 13:18:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:43.684 13:18:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:43.684 13:18:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:43.684 13:18:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@84 -- # check_ana_state nvme0c1n1 non-optimized 00:13:43.684 13:18:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:43.684 13:18:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:43.684 13:18:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:43.684 13:18:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:43.684 13:18:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:43.684 13:18:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:13:43.941 13:18:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:13:44.203 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@89 -- # check_ana_state nvme0c0n1 non-optimized 00:13:44.203 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:44.463 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:44.463 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:44.463 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:44.463 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:44.463 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # check_ana_state nvme0c1n1 inaccessible 00:13:44.463 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:44.463 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:44.463 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:44.463 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:44.463 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:44.463 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # wait 66068 00:13:48.696 00:13:48.696 job0: (groupid=0, jobs=1): err= 0: pid=66089: Fri Sep 27 13:18:50 2024 00:13:48.696 read: IOPS=10.2k, BW=39.9MiB/s (41.9MB/s)(240MiB/6003msec) 00:13:48.696 slat (usec): min=4, max=8329, avg=57.56, stdev=234.34 00:13:48.696 clat (usec): min=1577, max=16476, avg=8540.54, stdev=1545.61 00:13:48.696 lat (usec): min=1598, max=16510, avg=8598.10, stdev=1551.30 00:13:48.696 clat percentiles (usec): 00:13:48.696 | 1.00th=[ 4359], 5.00th=[ 6390], 10.00th=[ 7242], 20.00th=[ 7701], 00:13:48.696 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8586], 00:13:48.696 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[10028], 95.00th=[12125], 00:13:48.696 | 99.00th=[13304], 99.50th=[13566], 99.90th=[14222], 99.95th=[14353], 00:13:48.696 | 99.99th=[14877] 00:13:48.696 bw ( KiB/s): min=11168, max=25560, per=51.21%, avg=20938.73, stdev=4883.52, samples=11 00:13:48.696 iops : min= 2792, max= 6390, avg=5234.64, stdev=1220.86, samples=11 00:13:48.696 write: IOPS=5922, BW=23.1MiB/s (24.3MB/s)(126MiB/5441msec); 0 zone resets 00:13:48.696 slat (usec): min=16, max=1898, avg=66.60, stdev=163.37 00:13:48.696 clat (usec): min=1455, max=14554, avg=7368.26, stdev=1323.63 00:13:48.696 lat (usec): min=1479, max=14579, avg=7434.87, stdev=1327.78 00:13:48.696 clat percentiles (usec): 00:13:48.696 | 1.00th=[ 3359], 5.00th=[ 4293], 10.00th=[ 5800], 20.00th=[ 6849], 00:13:48.696 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7701], 00:13:48.696 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8455], 95.00th=[ 8848], 00:13:48.696 | 99.00th=[11469], 99.50th=[11994], 99.90th=[13173], 99.95th=[13435], 00:13:48.696 | 99.99th=[14484] 00:13:48.696 bw ( KiB/s): min=11504, max=25064, per=88.53%, avg=20973.36, stdev=4660.89, samples=11 00:13:48.696 iops : min= 2876, max= 6266, avg=5243.45, stdev=1165.25, samples=11 00:13:48.696 lat (msec) : 2=0.02%, 4=1.57%, 10=91.18%, 20=7.23% 00:13:48.696 cpu : usr=5.28%, sys=21.49%, ctx=5360, majf=0, minf=54 00:13:48.696 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:48.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.696 issued rwts: total=61366,32224,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.696 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.696 00:13:48.696 Run status group 0 (all jobs): 00:13:48.696 READ: bw=39.9MiB/s (41.9MB/s), 39.9MiB/s-39.9MiB/s (41.9MB/s-41.9MB/s), io=240MiB (251MB), run=6003-6003msec 00:13:48.696 WRITE: bw=23.1MiB/s (24.3MB/s), 23.1MiB/s-23.1MiB/s (24.3MB/s-24.3MB/s), io=126MiB (132MB), run=5441-5441msec 00:13:48.696 00:13:48.696 Disk stats (read/write): 00:13:48.696 nvme0n1: ios=60539/31624, merge=0/0, ticks=497575/219237, in_queue=716812, util=98.56% 00:13:48.696 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:13:48.696 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:13:48.957 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@97 -- # check_ana_state nvme0c0n1 optimized 00:13:48.957 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:13:48.957 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:48.957 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:48.957 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:48.957 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:48.957 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # check_ana_state nvme0c1n1 optimized 00:13:48.957 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:48.957 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:48.957 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:48.957 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:48.957 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:48.957 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # echo round-robin 00:13:48.957 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # fio_pid=66174 00:13:48.957 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:48.957 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # sleep 1 00:13:48.957 [global] 00:13:48.957 thread=1 00:13:48.957 invalidate=1 00:13:48.957 rw=randrw 00:13:48.957 time_based=1 00:13:48.957 runtime=6 00:13:48.957 ioengine=libaio 00:13:48.957 direct=1 00:13:48.957 bs=4096 00:13:48.957 iodepth=128 00:13:48.957 norandommap=0 00:13:48.957 numjobs=1 00:13:48.957 00:13:48.957 verify_dump=1 00:13:48.957 verify_backlog=512 00:13:48.957 verify_state_save=0 00:13:48.957 do_verify=1 00:13:48.957 verify=crc32c-intel 00:13:48.957 [job0] 00:13:48.957 filename=/dev/nvme0n1 00:13:48.957 Could not set queue depth (nvme0n1) 00:13:49.214 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:49.214 fio-3.35 00:13:49.214 Starting 1 thread 00:13:50.153 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:13:50.411 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:13:50.669 
13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@111 -- # check_ana_state nvme0c0n1 inaccessible 00:13:50.669 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:50.669 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:50.669 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:50.669 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:50.669 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:50.669 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@112 -- # check_ana_state nvme0c1n1 non-optimized 00:13:50.669 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:50.669 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:50.669 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:50.669 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:50.669 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:50.669 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@114 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:13:50.926 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:13:51.184 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@117 -- # check_ana_state nvme0c0n1 non-optimized 00:13:51.184 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:51.184 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:51.184 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:51.184 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:13:51.184 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:51.184 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # check_ana_state nvme0c1n1 inaccessible 00:13:51.184 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:51.184 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:51.184 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:51.184 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:51.184 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:51.184 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # wait 66174 00:13:55.396 00:13:55.396 job0: (groupid=0, jobs=1): err= 0: pid=66196: Fri Sep 27 13:18:56 2024 00:13:55.396 read: IOPS=11.4k, BW=44.7MiB/s (46.9MB/s)(268MiB/6002msec) 00:13:55.396 slat (usec): min=4, max=8094, avg=43.42, stdev=195.67 00:13:55.396 clat (usec): min=757, max=16205, avg=7673.25, stdev=1902.21 00:13:55.396 lat (usec): min=766, max=16236, avg=7716.67, stdev=1917.89 00:13:55.396 clat percentiles (usec): 00:13:55.396 | 1.00th=[ 3458], 5.00th=[ 4359], 10.00th=[ 5014], 20.00th=[ 5997], 00:13:55.396 | 30.00th=[ 7046], 40.00th=[ 7635], 50.00th=[ 8029], 60.00th=[ 8225], 00:13:55.396 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[11469], 00:13:55.396 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13829], 99.95th=[14615], 00:13:55.396 | 99.99th=[15270] 00:13:55.396 bw ( KiB/s): min=12912, max=40720, per=52.76%, avg=24146.18, stdev=8144.93, samples=11 00:13:55.396 iops : min= 3228, max=10180, avg=6036.55, stdev=2036.23, samples=11 00:13:55.396 write: IOPS=6680, BW=26.1MiB/s (27.4MB/s)(141MiB/5405msec); 0 zone resets 00:13:55.396 slat (usec): min=13, max=3230, avg=55.21, stdev=140.41 00:13:55.396 clat (usec): min=1339, max=14336, avg=6511.99, stdev=1778.43 00:13:55.396 lat (usec): min=1381, max=14500, avg=6567.20, stdev=1793.03 00:13:55.396 clat percentiles (usec): 00:13:55.396 | 1.00th=[ 2769], 5.00th=[ 3425], 10.00th=[ 3884], 20.00th=[ 4555], 00:13:55.396 | 30.00th=[ 5342], 40.00th=[ 6718], 50.00th=[ 7177], 60.00th=[ 7439], 00:13:55.396 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8455], 00:13:55.396 | 99.00th=[10945], 99.50th=[11731], 99.90th=[13173], 99.95th=[13435], 00:13:55.396 | 99.99th=[13960] 00:13:55.396 bw ( KiB/s): min=13712, max=40056, per=90.46%, avg=24170.91, stdev=7940.22, samples=11 00:13:55.396 iops : min= 3428, max=10014, avg=6042.73, stdev=1985.05, samples=11 00:13:55.396 lat (usec) : 1000=0.01% 00:13:55.396 lat (msec) : 2=0.08%, 4=5.66%, 10=89.70%, 20=4.56% 00:13:55.396 cpu : usr=5.70%, sys=23.20%, ctx=5772, majf=0, minf=90 00:13:55.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:55.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:55.396 issued rwts: total=68677,36106,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.396 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:13:55.396 00:13:55.396 Run status group 0 (all jobs): 00:13:55.396 READ: bw=44.7MiB/s (46.9MB/s), 44.7MiB/s-44.7MiB/s (46.9MB/s-46.9MB/s), io=268MiB (281MB), run=6002-6002msec 00:13:55.396 WRITE: bw=26.1MiB/s (27.4MB/s), 26.1MiB/s-26.1MiB/s (27.4MB/s-27.4MB/s), io=141MiB (148MB), run=5405-5405msec 00:13:55.396 00:13:55.396 Disk stats (read/write): 00:13:55.396 nvme0n1: ios=67826/35541, merge=0/0, ticks=496529/216416, in_queue=712945, util=98.62% 00:13:55.396 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@122 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:55.396 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:55.396 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:13:55.396 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:55.396 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.396 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:55.396 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.396 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:13:55.396 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.654 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # rm -f ./local-job0-0-verify.state 00:13:55.654 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@128 -- # rm -f ./local-job1-1-verify.state 00:13:55.654 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # trap - SIGINT SIGTERM EXIT 00:13:55.654 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # nvmftestfini 00:13:55.654 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@331 -- # nvmfcleanup 00:13:55.654 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:13:55.654 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:55.654 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:13:55.654 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:55.654 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:55.654 rmmod nvme_tcp 00:13:55.654 rmmod nvme_fabrics 00:13:55.654 rmmod nvme_keyring 00:13:55.654 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:55.654 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:13:55.654 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:13:55.654 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # '[' -n 65977 ']' 
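The teardown traced around this point reduces to stopping the target process and removing the virtual test network. A compressed sketch using only the names that appear in the log (error handling and the iptables restore omitted); the target0/target1 peers live inside the namespace and disappear when it is deleted:

    # Stop the nvmf target started earlier.
    kill "$nvmfpid" 2>/dev/null
    wait "$nvmfpid" 2>/dev/null || true

    # Remove the bridge, the host-side initiator veths, and the target netns.
    ip link delete nvmf_br
    ip link delete initiator0
    ip link delete initiator1
    ip netns delete nvmf_ns_spdk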
00:13:55.654 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@333 -- # killprocess 65977 00:13:55.654 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 65977 ']' 00:13:55.654 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 65977 00:13:55.654 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:13:55.655 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:55.655 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65977 00:13:55.912 killing process with pid 65977 00:13:55.912 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:55.912 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:55.912 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65977' 00:13:55.912 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 65977 00:13:55.912 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 65977 00:13:55.912 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:13:55.912 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # nvmf_fini 00:13:55.912 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@264 -- # local dev 00:13:55.912 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@267 -- # remove_target_ns 00:13:55.912 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:55.912 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:55.912 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:55.912 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@268 -- # delete_main_bridge 00:13:55.912 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:55.912 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:13:55.912 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:13:55.912 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:13:55.912 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:13:55.912 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:13:56.170 13:18:57 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@271 -- # continue 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@271 -- # continue 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@284 -- # iptr 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@538 -- # iptables-restore 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@538 -- # iptables-save 00:13:56.170 00:13:56.170 real 0m19.517s 00:13:56.170 user 1m12.979s 00:13:56.170 sys 0m9.439s 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:56.170 ************************************ 00:13:56.170 END TEST nvmf_target_multipath 00:13:56.170 ************************************ 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:56.170 13:18:57 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:56.170 ************************************ 00:13:56.170 START TEST nvmf_zcopy 00:13:56.170 ************************************ 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:56.170 * Looking for test storage... 00:13:56.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:13:56.170 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.430 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:56.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.430 --rc genhtml_branch_coverage=1 00:13:56.430 --rc genhtml_function_coverage=1 00:13:56.430 --rc genhtml_legend=1 00:13:56.430 --rc geninfo_all_blocks=1 00:13:56.430 --rc geninfo_unexecuted_blocks=1 00:13:56.430 00:13:56.431 ' 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:56.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.431 --rc genhtml_branch_coverage=1 00:13:56.431 --rc genhtml_function_coverage=1 00:13:56.431 --rc genhtml_legend=1 00:13:56.431 --rc geninfo_all_blocks=1 00:13:56.431 --rc geninfo_unexecuted_blocks=1 00:13:56.431 00:13:56.431 ' 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:56.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.431 --rc genhtml_branch_coverage=1 00:13:56.431 --rc genhtml_function_coverage=1 00:13:56.431 --rc genhtml_legend=1 00:13:56.431 --rc geninfo_all_blocks=1 00:13:56.431 --rc geninfo_unexecuted_blocks=1 00:13:56.431 00:13:56.431 ' 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:56.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.431 --rc genhtml_branch_coverage=1 00:13:56.431 --rc genhtml_function_coverage=1 00:13:56.431 --rc genhtml_legend=1 00:13:56.431 --rc geninfo_all_blocks=1 00:13:56.431 --rc geninfo_unexecuted_blocks=1 00:13:56.431 00:13:56.431 ' 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:56.431 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # prepare_net_devs 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # 
local -g is_hw=no 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # remove_target_ns 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@276 -- # nvmf_veth_init 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@233 -- # create_target_ns 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@234 -- # create_main_bridge 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@114 -- # delete_main_bridge 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@130 -- # return 0 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:13:56.431 13:18:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:13:56.431 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@160 -- # set_up initiator0 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:13:56.432 13:18:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@160 -- # set_up target0 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set target0 up 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@161 -- # set_up target0_br 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_ns target0 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee 
/sys/class/net/initiator0/ifalias' 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:13:56.432 10.0.0.1 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:13:56.432 10.0.0.2 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@75 -- # set_up initiator0 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@135 -- # local 
dev=initiator0_br bridge=nvmf_br 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@138 -- # set_up target0_br 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:13:56.432 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:13:56.433 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:13:56.433 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:56.433 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:56.433 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@51 
-- # _ns=NVMF_TARGET_NS_CMD 00:13:56.433 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:56.433 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:13:56.433 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:13:56.433 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:13:56.433 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:13:56.433 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@160 -- # set_up initiator1 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@160 -- # set_up target1 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set target1 up 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@161 -- # set_up target1_br 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # 
add_to_ns target1 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772163 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:13:56.692 10.0.0.3 00:13:56.692 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772164 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:13:56.693 10.0.0.4 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@75 -- # set_up initiator1 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:13:56.693 
13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@138 -- # set_up target1_br 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 
4420 -j ACCEPT' 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 2 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo initiator0 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=initiator0 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:56.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:56.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:13:56.693 00:13:56.693 --- 10.0.0.1 ping statistics --- 00:13:56.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.693 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target0 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo target0 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=target0 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:13:56.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:56.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:13:56.693 00:13:56.693 --- 10.0.0.2 ping statistics --- 00:13:56.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.693 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator1 00:13:56.693 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo initiator1 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=initiator1 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:13:56.694 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:56.694 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.150 ms 00:13:56.694 00:13:56.694 --- 10.0.0.3 ping statistics --- 00:13:56.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.694 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target1 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo target1 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=target1 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:13:56.694 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:13:56.694 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:13:56.694 00:13:56.694 --- 10.0.0.4 ping statistics --- 00:13:56.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.694 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # return 0 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo initiator0 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=initiator0 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:56.694 
13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator1 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo initiator1 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=initiator1 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target0 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo target0 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=target0 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:56.694 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address 1 
NVMF_TARGET_NS_CMD 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target1 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo target1 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=target1 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@324 -- # nvmfpid=66499 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # waitforlisten 66499 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 66499 ']' 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.953 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:56.953 [2024-09-27 13:18:58.632020] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:13:56.953 [2024-09-27 13:18:58.632118] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.953 [2024-09-27 13:18:58.770291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.212 [2024-09-27 13:18:58.848935] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.212 [2024-09-27 13:18:58.849223] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.212 [2024-09-27 13:18:58.849499] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.212 [2024-09-27 13:18:58.849697] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.212 [2024-09-27 13:18:58.849840] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
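At this point the trace has finished the veth/bridge topology (nvmf_br bridging initiator0/target0 and initiator1/target1, addresses 10.0.0.1-10.0.0.4, iptables ACCEPT rules for port 4420) and nvmfappstart has launched nvmf_tgt inside the nvmf_ns_spdk namespace, waiting for its RPC socket. A condensed manual equivalent for a single pair, as a sketch only; the real helpers in test/nvmf/setup.sh and test/nvmf/common.sh add ifalias bookkeeping, the second pair, retries and cleanup traps on top of this:

# Sketch: one bridged veth pair with the target end in the test namespace,
# then nvmf_tgt started there and its RPC socket awaited.
ip netns add nvmf_ns_spdk
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link add initiator0 type veth peer name initiator0_br
ip link add target0 type veth peer name target0_br
ip link set target0 netns nvmf_ns_spdk
ip addr add 10.0.0.1/24 dev initiator0
ip link set initiator0 up
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
ip netns exec nvmf_ns_spdk ip link set target0 up
ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
ip link set target0_br master nvmf_br && ip link set target0_br up
iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target sanity check
ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # crude stand-in for waitforlisten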
00:13:57.212 [2024-09-27 13:18:58.849996] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.212 [2024-09-27 13:18:58.883006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:58.146 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:58.146 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:13:58.146 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:13:58.146 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:58.146 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:58.146 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.146 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:58.147 [2024-09-27 13:18:59.688459] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@20 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:58.147 [2024-09-27 13:18:59.704565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:58.147 malloc0 00:13:58.147 13:18:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@28 -- # gen_nvmf_target_json 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # config=() 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # local subsystem config 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # for subsystem in "${@:-1}" 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # config+=("$(cat <<-EOF 00:13:58.147 { 00:13:58.147 "params": { 00:13:58.147 "name": "Nvme$subsystem", 00:13:58.147 "trtype": "$TEST_TRANSPORT", 00:13:58.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:58.147 "adrfam": "ipv4", 00:13:58.147 "trsvcid": "$NVMF_PORT", 00:13:58.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:58.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:58.147 "hdgst": ${hdgst:-false}, 00:13:58.147 "ddgst": ${ddgst:-false} 00:13:58.147 }, 00:13:58.147 "method": "bdev_nvme_attach_controller" 00:13:58.147 } 00:13:58.147 EOF 00:13:58.147 )") 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # cat 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # jq . 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@393 -- # IFS=, 00:13:58.147 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # printf '%s\n' '{ 00:13:58.147 "params": { 00:13:58.147 "name": "Nvme1", 00:13:58.147 "trtype": "tcp", 00:13:58.147 "traddr": "10.0.0.2", 00:13:58.147 "adrfam": "ipv4", 00:13:58.147 "trsvcid": "4420", 00:13:58.147 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:58.147 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:58.147 "hdgst": false, 00:13:58.147 "ddgst": false 00:13:58.147 }, 00:13:58.147 "method": "bdev_nvme_attach_controller" 00:13:58.147 }' 00:13:58.147 [2024-09-27 13:18:59.802805] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:13:58.147 [2024-09-27 13:18:59.802915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66532 ] 00:13:58.147 [2024-09-27 13:18:59.941611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.405 [2024-09-27 13:19:00.001203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.405 [2024-09-27 13:19:00.038751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:58.405 Running I/O for 10 seconds... 
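Everything the target needed for this test was configured through rpc_cmd (scripts/rpc.py under the hood, talking to /var/tmp/spdk.sock) before the 10-second verify run above kicked off. Stripped of the test wrappers, the same setup would look roughly like this; the commands and arguments are the ones visible in the trace:

    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy                       # TCP transport with zero-copy enabled
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                              # 32 MiB malloc bdev, 4 KiB blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1      # becomes namespace 1 of cnode1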
00:14:08.315 5762.00 IOPS, 45.02 MiB/s 5803.00 IOPS, 45.34 MiB/s 5814.67 IOPS, 45.43 MiB/s 5829.50 IOPS, 45.54 MiB/s 5838.20 IOPS, 45.61 MiB/s 5842.00 IOPS, 45.64 MiB/s 5843.14 IOPS, 45.65 MiB/s 5843.88 IOPS, 45.66 MiB/s 5840.89 IOPS, 45.63 MiB/s 5846.30 IOPS, 45.67 MiB/s
00:14:08.315 Latency(us)
00:14:08.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:08.315 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:14:08.315 Verification LBA range: start 0x0 length 0x1000
00:14:08.315 Nvme1n1 : 10.01 5850.53 45.71 0.00 0.00 21807.75 1310.72 32172.22
00:14:08.315 ===================================================================================================================
00:14:08.315 Total : 5850.53 45.71 0.00 0.00 21807.75 1310.72 32172.22
00:14:08.574 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@34 -- # perfpid=66649
00:14:08.574 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@36 -- # xtrace_disable
00:14:08.574 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:08.574 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:14:08.574 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@32 -- # gen_nvmf_target_json
00:14:08.574 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # config=()
00:14:08.574 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # local subsystem config
00:14:08.574 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # for subsystem in "${@:-1}"
00:14:08.574 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # config+=("$(cat <<-EOF
00:14:08.574 {
00:14:08.574 "params": {
00:14:08.574 "name": "Nvme$subsystem",
00:14:08.574 "trtype": "$TEST_TRANSPORT",
00:14:08.574 "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:08.574 "adrfam": "ipv4",
00:14:08.574 "trsvcid": "$NVMF_PORT",
00:14:08.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:08.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:08.574 "hdgst": ${hdgst:-false},
00:14:08.574 "ddgst": ${ddgst:-false}
00:14:08.574 },
00:14:08.574 "method": "bdev_nvme_attach_controller"
00:14:08.574 }
00:14:08.574 EOF
00:14:08.574 )")
00:14:08.574 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # cat
00:14:08.574 [2024-09-27 13:19:10.324878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:08.574 [2024-09-27 13:19:10.324922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:08.574 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # jq .
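A quick way to sanity-check the 10-second summary table above: bdevperf was issuing 8 KiB I/O (-o 8192), so the MiB/s column is just IOPS multiplied by the I/O size:

    # 5850.53 IOPS x 8192 B per I/O, converted to MiB/s - matches the reported 45.71
    echo '5850.53 * 8192 / 1048576' | bc -l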
00:14:08.574 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@393 -- # IFS=, 00:14:08.574 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # printf '%s\n' '{ 00:14:08.574 "params": { 00:14:08.574 "name": "Nvme1", 00:14:08.574 "trtype": "tcp", 00:14:08.574 "traddr": "10.0.0.2", 00:14:08.574 "adrfam": "ipv4", 00:14:08.574 "trsvcid": "4420", 00:14:08.574 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.574 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:08.574 "hdgst": false, 00:14:08.574 "ddgst": false 00:14:08.574 }, 00:14:08.574 "method": "bdev_nvme_attach_controller" 00:14:08.574 }' 00:14:08.574 [2024-09-27 13:19:10.332842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.574 [2024-09-27 13:19:10.332871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.574 [2024-09-27 13:19:10.344872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.574 [2024-09-27 13:19:10.344916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.574 [2024-09-27 13:19:10.356854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.574 [2024-09-27 13:19:10.356882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.574 [2024-09-27 13:19:10.368854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.574 [2024-09-27 13:19:10.368886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.574 [2024-09-27 13:19:10.380852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.574 [2024-09-27 13:19:10.380879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.574 [2024-09-27 13:19:10.388852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.574 [2024-09-27 13:19:10.388880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.574 [2024-09-27 13:19:10.398974] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:14:08.574 [2024-09-27 13:19:10.399844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66649 ] 00:14:08.574 [2024-09-27 13:19:10.400861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.574 [2024-09-27 13:19:10.400889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.574 [2024-09-27 13:19:10.412859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.574 [2024-09-27 13:19:10.412887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.832 [2024-09-27 13:19:10.424881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.832 [2024-09-27 13:19:10.424913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.832 [2024-09-27 13:19:10.436868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.832 [2024-09-27 13:19:10.436898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.832 [2024-09-27 13:19:10.448877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.448908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.460875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.460904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.472879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.472909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.484882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.484911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.496888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.496917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.508891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.508920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.520895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.520923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.532898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.532926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.544903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.544931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.546494] app.c: 917:spdk_app_start: *NOTICE*: 
Total cores available: 1 00:14:08.833 [2024-09-27 13:19:10.552923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.552959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.560913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.560941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.572918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.572950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.580943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.580982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.588934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.588966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.596930] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.596959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.604630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.833 [2024-09-27 13:19:10.604924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.604948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.612924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.612952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.620949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.620988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.628958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.628997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.636959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.637003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.641697] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:08.833 [2024-09-27 13:19:10.644951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.644983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.652962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.653003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.660943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:08.833 [2024-09-27 13:19:10.660972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.833 [2024-09-27 13:19:10.668945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.833 [2024-09-27 13:19:10.668974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.091 [2024-09-27 13:19:10.680977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.091 [2024-09-27 13:19:10.681013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.091 [2024-09-27 13:19:10.688977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.091 [2024-09-27 13:19:10.689008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.697018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.697051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.705003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.705035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.713014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.713047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.721022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.721053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.729022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.729051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.737143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.737183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.745128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.745158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 Running I/O for 5 seconds... 
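From this point to the end of the capture, the log is dominated by the same two messages repeating while the 5-second randrw job runs: nvmf_subsystem_add_ns being rejected because NSID 1 (malloc0) is already attached to cnode1. The error being reported from nvmf_rpc_ns_paused suggests each attempt still pauses and then resumes the subsystem, so the zero-copy data path keeps getting interrupted under load. A hypothetical reconstruction of the kind of loop that produces exactly this pattern, not copied from zcopy.sh:

    # Re-issue an add_ns that is expected to fail while bdevperf ($perfpid) still has
    # I/O in flight; every attempt logs the 'NSID 1 already in use' / 'Unable to add
    # namespace' pair seen below.
    while kill -0 "$perfpid" 2>/dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done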
00:14:09.092 [2024-09-27 13:19:10.753148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.753184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.766825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.766861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.782900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.782939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.800491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.800528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.811449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.811484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.824875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.824911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.841023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.841061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.858145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.858181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.867805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.867842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.883423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.883459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.900482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.900520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.910471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.910507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.921900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.921936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.092 [2024-09-27 13:19:10.932701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.092 [2024-09-27 13:19:10.932736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.350 [2024-09-27 13:19:10.943609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.350 
[2024-09-27 13:19:10.943646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.350 [2024-09-27 13:19:10.956405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.351 [2024-09-27 13:19:10.956442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.351 [2024-09-27 13:19:10.973740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.351 [2024-09-27 13:19:10.973774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.351 [2024-09-27 13:19:10.989350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.351 [2024-09-27 13:19:10.989387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.351 [2024-09-27 13:19:10.999305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.351 [2024-09-27 13:19:10.999341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.351 [2024-09-27 13:19:11.011057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.351 [2024-09-27 13:19:11.011094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.351 [2024-09-27 13:19:11.022100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.351 [2024-09-27 13:19:11.022136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.351 [2024-09-27 13:19:11.036935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.351 [2024-09-27 13:19:11.036972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.351 [2024-09-27 13:19:11.047754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.351 [2024-09-27 13:19:11.047789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.351 [2024-09-27 13:19:11.062509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.351 [2024-09-27 13:19:11.062547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.351 [2024-09-27 13:19:11.079424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.351 [2024-09-27 13:19:11.079460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.351 [2024-09-27 13:19:11.088943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.351 [2024-09-27 13:19:11.088979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.351 [2024-09-27 13:19:11.104133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.351 [2024-09-27 13:19:11.104169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.351 [2024-09-27 13:19:11.121956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.351 [2024-09-27 13:19:11.121993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.351 [2024-09-27 13:19:11.131878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.351 [2024-09-27 13:19:11.131914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.351 [2024-09-27 13:19:11.143239] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.351 [2024-09-27 13:19:11.143274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.351 [2024-09-27 13:19:11.154215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.351 [2024-09-27 13:19:11.154253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.351 [2024-09-27 13:19:11.168748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.351 [2024-09-27 13:19:11.168782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.351 [2024-09-27 13:19:11.178390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.351 [2024-09-27 13:19:11.178426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.351 [2024-09-27 13:19:11.194164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.351 [2024-09-27 13:19:11.194200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.610 [2024-09-27 13:19:11.210238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.610 [2024-09-27 13:19:11.210275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.610 [2024-09-27 13:19:11.219464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.610 [2024-09-27 13:19:11.219501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.610 [2024-09-27 13:19:11.235101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.610 [2024-09-27 13:19:11.235138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.610 [2024-09-27 13:19:11.251567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.610 [2024-09-27 13:19:11.251602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.610 [2024-09-27 13:19:11.261128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.610 [2024-09-27 13:19:11.261164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.610 [2024-09-27 13:19:11.276183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.610 [2024-09-27 13:19:11.276220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.610 [2024-09-27 13:19:11.286320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.610 [2024-09-27 13:19:11.286356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.610 [2024-09-27 13:19:11.300669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.610 [2024-09-27 13:19:11.300718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.610 [2024-09-27 13:19:11.317872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.610 [2024-09-27 13:19:11.317908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.610 [2024-09-27 13:19:11.327858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.610 [2024-09-27 13:19:11.327893] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.610 [2024-09-27 13:19:11.342426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.610 [2024-09-27 13:19:11.342464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.610 [2024-09-27 13:19:11.358627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.610 [2024-09-27 13:19:11.358662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.610 [2024-09-27 13:19:11.376259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.610 [2024-09-27 13:19:11.376297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.610 [2024-09-27 13:19:11.391025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.610 [2024-09-27 13:19:11.391063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.610 [2024-09-27 13:19:11.406137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.610 [2024-09-27 13:19:11.406177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.610 [2024-09-27 13:19:11.415348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.610 [2024-09-27 13:19:11.415384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.610 [2024-09-27 13:19:11.431518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.610 [2024-09-27 13:19:11.431555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.610 [2024-09-27 13:19:11.441289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.610 [2024-09-27 13:19:11.441325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.869 [2024-09-27 13:19:11.457653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.869 [2024-09-27 13:19:11.457703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.869 [2024-09-27 13:19:11.475394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.869 [2024-09-27 13:19:11.475433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.869 [2024-09-27 13:19:11.490253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.869 [2024-09-27 13:19:11.490290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.869 [2024-09-27 13:19:11.506173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.869 [2024-09-27 13:19:11.506208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.869 [2024-09-27 13:19:11.522578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.869 [2024-09-27 13:19:11.522615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.869 [2024-09-27 13:19:11.540655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.869 [2024-09-27 13:19:11.540703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.869 [2024-09-27 13:19:11.555770] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.869 [2024-09-27 13:19:11.555805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.869 [2024-09-27 13:19:11.565186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.869 [2024-09-27 13:19:11.565222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.869 [2024-09-27 13:19:11.580718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.869 [2024-09-27 13:19:11.580755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.869 [2024-09-27 13:19:11.595906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.869 [2024-09-27 13:19:11.595944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.869 [2024-09-27 13:19:11.613541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.869 [2024-09-27 13:19:11.613578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.869 [2024-09-27 13:19:11.628355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.869 [2024-09-27 13:19:11.628391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.869 [2024-09-27 13:19:11.644049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.869 [2024-09-27 13:19:11.644085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.869 [2024-09-27 13:19:11.663348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.869 [2024-09-27 13:19:11.663384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.869 [2024-09-27 13:19:11.678276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.869 [2024-09-27 13:19:11.678313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.869 [2024-09-27 13:19:11.690244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.869 [2024-09-27 13:19:11.690280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.869 [2024-09-27 13:19:11.707501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.869 [2024-09-27 13:19:11.707537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.127 [2024-09-27 13:19:11.722308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.127 [2024-09-27 13:19:11.722345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.127 [2024-09-27 13:19:11.739900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.127 [2024-09-27 13:19:11.739936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.127 11567.00 IOPS, 90.37 MiB/s [2024-09-27 13:19:11.754552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.127 [2024-09-27 13:19:11.754587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.127 [2024-09-27 13:19:11.770178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.127 [2024-09-27 
13:19:11.770213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.127 [2024-09-27 13:19:11.787822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.127 [2024-09-27 13:19:11.787857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.127 [2024-09-27 13:19:11.802448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.127 [2024-09-27 13:19:11.802484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.127 [2024-09-27 13:19:11.818041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.127 [2024-09-27 13:19:11.818078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.127 [2024-09-27 13:19:11.836307] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.127 [2024-09-27 13:19:11.836344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.127 [2024-09-27 13:19:11.851250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.127 [2024-09-27 13:19:11.851301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.127 [2024-09-27 13:19:11.866901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.127 [2024-09-27 13:19:11.866936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.127 [2024-09-27 13:19:11.883341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.127 [2024-09-27 13:19:11.883377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.127 [2024-09-27 13:19:11.900788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.127 [2024-09-27 13:19:11.900822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.127 [2024-09-27 13:19:11.915964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.127 [2024-09-27 13:19:11.915999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.127 [2024-09-27 13:19:11.925248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.127 [2024-09-27 13:19:11.925300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.127 [2024-09-27 13:19:11.941385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.127 [2024-09-27 13:19:11.941421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.127 [2024-09-27 13:19:11.958184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.127 [2024-09-27 13:19:11.958235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.385 [2024-09-27 13:19:11.976336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.385 [2024-09-27 13:19:11.976375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.385 [2024-09-27 13:19:11.991314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.385 [2024-09-27 13:19:11.991351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.385 [2024-09-27 13:19:12.009074] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.385 [2024-09-27 13:19:12.009126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.385 [2024-09-27 13:19:12.024071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.385 [2024-09-27 13:19:12.024107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.385 [2024-09-27 13:19:12.034119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.386 [2024-09-27 13:19:12.034155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.386 [2024-09-27 13:19:12.049401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.386 [2024-09-27 13:19:12.049453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.386 [2024-09-27 13:19:12.065846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.386 [2024-09-27 13:19:12.065881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.386 [2024-09-27 13:19:12.083714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.386 [2024-09-27 13:19:12.083763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.386 [2024-09-27 13:19:12.098499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.386 [2024-09-27 13:19:12.098534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.386 [2024-09-27 13:19:12.108462] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.386 [2024-09-27 13:19:12.108497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.386 [2024-09-27 13:19:12.124025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.386 [2024-09-27 13:19:12.124061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.386 [2024-09-27 13:19:12.138792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.386 [2024-09-27 13:19:12.138834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.386 [2024-09-27 13:19:12.154291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.386 [2024-09-27 13:19:12.154326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.386 [2024-09-27 13:19:12.172480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.386 [2024-09-27 13:19:12.172518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.386 [2024-09-27 13:19:12.187391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.386 [2024-09-27 13:19:12.187426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.386 [2024-09-27 13:19:12.197333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.386 [2024-09-27 13:19:12.197384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.386 [2024-09-27 13:19:12.213767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.386 [2024-09-27 13:19:12.213801] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.386 [2024-09-27 13:19:12.223984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.386 [2024-09-27 13:19:12.224020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.643 [2024-09-27 13:19:12.239287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.643 [2024-09-27 13:19:12.239337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.643 [2024-09-27 13:19:12.257226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.643 [2024-09-27 13:19:12.257261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.643 [2024-09-27 13:19:12.271939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.643 [2024-09-27 13:19:12.271975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.643 [2024-09-27 13:19:12.287430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.643 [2024-09-27 13:19:12.287481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.643 [2024-09-27 13:19:12.306682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.643 [2024-09-27 13:19:12.306745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.643 [2024-09-27 13:19:12.322033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.643 [2024-09-27 13:19:12.322069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.643 [2024-09-27 13:19:12.340312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.643 [2024-09-27 13:19:12.340348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.643 [2024-09-27 13:19:12.355072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.644 [2024-09-27 13:19:12.355107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.644 [2024-09-27 13:19:12.371005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.644 [2024-09-27 13:19:12.371042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.644 [2024-09-27 13:19:12.388066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.644 [2024-09-27 13:19:12.388101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.644 [2024-09-27 13:19:12.406096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.644 [2024-09-27 13:19:12.406132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.644 [2024-09-27 13:19:12.420954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.644 [2024-09-27 13:19:12.420990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.644 [2024-09-27 13:19:12.430493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.644 [2024-09-27 13:19:12.430530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.644 [2024-09-27 13:19:12.446336] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.644 [2024-09-27 13:19:12.446372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.644 [2024-09-27 13:19:12.463089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.644 [2024-09-27 13:19:12.463126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.644 [2024-09-27 13:19:12.481198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.644 [2024-09-27 13:19:12.481249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.903 [2024-09-27 13:19:12.495318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.903 [2024-09-27 13:19:12.495353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.903 [2024-09-27 13:19:12.511056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.903 [2024-09-27 13:19:12.511092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.903 [2024-09-27 13:19:12.527943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.903 [2024-09-27 13:19:12.527976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.903 [2024-09-27 13:19:12.544970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.903 [2024-09-27 13:19:12.545006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.903 [2024-09-27 13:19:12.561128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.903 [2024-09-27 13:19:12.561180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.903 [2024-09-27 13:19:12.578739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.903 [2024-09-27 13:19:12.578790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.903 [2024-09-27 13:19:12.593766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.903 [2024-09-27 13:19:12.593796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.903 [2024-09-27 13:19:12.608758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.903 [2024-09-27 13:19:12.608793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.903 [2024-09-27 13:19:12.624881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.903 [2024-09-27 13:19:12.624932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.903 [2024-09-27 13:19:12.641620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.903 [2024-09-27 13:19:12.641658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.903 [2024-09-27 13:19:12.657799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.903 [2024-09-27 13:19:12.657848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.903 [2024-09-27 13:19:12.676014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.903 [2024-09-27 13:19:12.676049] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:10.903 [... the pair "subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" repeats continuously from 13:19:12.691 through 13:19:15.752 while the I/O job runs; periodic throughput samples in this window: 11567.50 IOPS (90.37 MiB/s), 11544.00 IOPS (90.19 MiB/s), 11520.00 IOPS (90.00 MiB/s), 11505.60 IOPS (89.89 MiB/s) ...]
00:14:14.014
00:14:14.014 Latency(us)
00:14:14.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:14.014 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:14:14.014 Nvme1n1 : 5.01 11505.77 89.89 0.00 0.00 11109.68 4706.68 19541.64
00:14:14.014 ===================================================================================================================
00:14:14.014 Total : 11505.77 89.89 0.00 0.00 11109.68 4706.68 19541.64
00:14:14.014 [2024-09-27 13:19:15.762523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:14.014 [2024-09-27 13:19:15.762560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:14.014 [2024-09-27 13:19:15.774519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:14.014
[2024-09-27 13:19:15.774569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.014 [2024-09-27 13:19:15.786561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.014 [2024-09-27 13:19:15.786608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.014 [2024-09-27 13:19:15.798544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.014 [2024-09-27 13:19:15.798606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.014 [2024-09-27 13:19:15.810557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.014 [2024-09-27 13:19:15.810601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.014 [2024-09-27 13:19:15.822557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.014 [2024-09-27 13:19:15.822599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.014 [2024-09-27 13:19:15.834564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.014 [2024-09-27 13:19:15.834619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.014 [2024-09-27 13:19:15.846538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.014 [2024-09-27 13:19:15.846570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.014 [2024-09-27 13:19:15.858542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.014 [2024-09-27 13:19:15.858574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.272 [2024-09-27 13:19:15.870561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.272 [2024-09-27 13:19:15.870613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.272 [2024-09-27 13:19:15.882547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.272 [2024-09-27 13:19:15.882575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.272 [2024-09-27 13:19:15.894574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.272 [2024-09-27 13:19:15.894616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.272 [2024-09-27 13:19:15.906572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.272 [2024-09-27 13:19:15.906611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.272 [2024-09-27 13:19:15.918548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.272 [2024-09-27 13:19:15.918592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.272 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 37: kill: (66649) - No such process 00:14:14.272 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@44 -- # wait 66649 00:14:14.272 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@47 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.272 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.272 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:14:14.272 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.272 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@48 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:14.272 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.272 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:14.272 delay0 00:14:14.272 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.272 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:14.272 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.272 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:14.272 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.272 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:14.272 [2024-09-27 13:19:16.107537] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:20.831 Initializing NVMe Controllers 00:14:20.832 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:20.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:20.832 Initialization complete. Launching workers. 
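[editor's note] For anyone reproducing this step outside the autotest harness, the trace above corresponds roughly to the sketch below. It assumes an SPDK target is already running with subsystem nqn.2016-06.io.spdk:cnode1, a malloc0 bdev, and the 10.0.0.2:4420 TCP listener shown in the log, and that the harness's rpc_cmd wrapper is equivalent to calling scripts/rpc.py against the default RPC socket; the flag values are copied verbatim from the log.

# Hedged sketch of the namespace swap plus abort run traced above (not the harness script itself).
# Replace NSID 1 with a delay bdev so the abort example has long-lived I/O to cancel.
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000    # avg/p99 read and write latencies, in microseconds
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# Drive 5 seconds of randrw I/O over TCP against that namespace and submit aborts.
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'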
00:14:20.832 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 85 00:14:20.832 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 372, failed to submit 33 00:14:20.832 success 243, unsuccessful 129, failed 0 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@55 -- # nvmftestfini 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@331 -- # nvmfcleanup 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@99 -- # sync 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:20.832 rmmod nvme_tcp 00:14:20.832 rmmod nvme_fabrics 00:14:20.832 rmmod nvme_keyring 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # '[' -n 66499 ']' 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@333 -- # killprocess 66499 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 66499 ']' 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 66499 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66499 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:20.832 killing process with pid 66499 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66499' 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 66499 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 66499 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # nvmf_fini 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@264 -- # local dev 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@267 -- # remove_target_ns 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> 
/dev/null' 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@268 -- # delete_main_bridge 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@271 -- # continue 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@271 -- # continue 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:14:20.832 13:19:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@284 -- # iptr 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@538 -- # iptables-save 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@538 -- # iptables-restore 00:14:20.832 00:14:20.832 real 0m24.747s 00:14:20.832 user 0m40.677s 00:14:20.832 sys 0m6.428s 00:14:20.832 ************************************ 00:14:20.832 END TEST nvmf_zcopy 00:14:20.832 ************************************ 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:14:20.832 00:14:20.832 real 2m33.926s 00:14:20.832 user 6m47.819s 00:14:20.832 sys 0m51.130s 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:20.832 13:19:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:20.832 ************************************ 00:14:20.832 END TEST nvmf_target_core 00:14:20.832 ************************************ 00:14:21.090 13:19:22 nvmf_tcp -- nvmf/nvmf.sh@11 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:21.090 13:19:22 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:21.090 13:19:22 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:21.090 13:19:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:21.090 ************************************ 00:14:21.090 START TEST nvmf_target_extra 00:14:21.090 ************************************ 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:21.090 * Looking for test storage... 
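[editor's note] The teardown traced in the nvmftestfini/nvmf_fini entries above amounts to roughly the sequence below. The module names, the device names (nvmf_br, initiator0, initiator1), and the SPDK_NVMF iptables tag are taken from the log; the PID variable and root shell are assumptions, and the removal of the harness's target network namespace (_remove_target_ns) is elided here.

# Hedged sketch of the cleanup shown in the log (run as root).
modprobe -v -r nvme-tcp            # the log's rmmod lines show this unloading nvme_tcp, nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics
kill "$target_pid"                 # 66499 in this run; $target_pid is a placeholder for the nvmf target process
ip link delete nvmf_br             # main bridge created during setup
ip link delete initiator0          # initiator-side veth endpoints
ip link delete initiator1
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK-tagged firewall rules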
00:14:21.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.090 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:21.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.091 --rc genhtml_branch_coverage=1 00:14:21.091 --rc genhtml_function_coverage=1 00:14:21.091 --rc genhtml_legend=1 00:14:21.091 --rc geninfo_all_blocks=1 00:14:21.091 --rc geninfo_unexecuted_blocks=1 00:14:21.091 00:14:21.091 ' 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:21.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.091 --rc genhtml_branch_coverage=1 00:14:21.091 --rc genhtml_function_coverage=1 00:14:21.091 --rc genhtml_legend=1 00:14:21.091 --rc geninfo_all_blocks=1 00:14:21.091 --rc geninfo_unexecuted_blocks=1 00:14:21.091 00:14:21.091 ' 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:21.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.091 --rc genhtml_branch_coverage=1 00:14:21.091 --rc genhtml_function_coverage=1 00:14:21.091 --rc genhtml_legend=1 00:14:21.091 --rc geninfo_all_blocks=1 00:14:21.091 --rc geninfo_unexecuted_blocks=1 00:14:21.091 00:14:21.091 ' 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:21.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.091 --rc genhtml_branch_coverage=1 00:14:21.091 --rc genhtml_function_coverage=1 00:14:21.091 --rc genhtml_legend=1 00:14:21.091 --rc geninfo_all_blocks=1 00:14:21.091 --rc geninfo_unexecuted_blocks=1 00:14:21.091 00:14:21.091 ' 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.091 13:19:22 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@50 -- # : 0 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:21.091 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:21.091 ************************************ 00:14:21.091 START TEST nvmf_auth_target 00:14:21.091 ************************************ 00:14:21.091 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:21.350 * Looking for test storage... 
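The "[: : integer expression expected" message above comes from common.sh line 31 evaluating '[ '' -eq 1 ]': with test(1), -eq needs integer operands on both sides, so an empty or unset variable produces the warning; the comparison simply fails and the run continues. A small illustration of the failure mode and a guarded variant, written for this note rather than taken from the scripts (the variable name is made up):

  flag=""                                   # stands in for an unset SPDK_*-style setting
  [ "$flag" -eq 1 ] && echo enabled         # prints "[: : integer expression expected"
  [ "${flag:-0}" -eq 1 ] && echo enabled    # defaulting to 0 keeps the comparison numeric and silent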
00:14:21.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:21.350 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:21.350 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:14:21.350 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:21.350 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:21.350 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:21.350 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:21.350 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:21.350 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:21.350 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:21.350 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:21.350 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:21.350 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:21.350 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:21.350 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:21.350 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:21.350 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:21.350 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:21.350 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:21.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.351 --rc genhtml_branch_coverage=1 00:14:21.351 --rc genhtml_function_coverage=1 00:14:21.351 --rc genhtml_legend=1 00:14:21.351 --rc geninfo_all_blocks=1 00:14:21.351 --rc geninfo_unexecuted_blocks=1 00:14:21.351 00:14:21.351 ' 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:21.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.351 --rc genhtml_branch_coverage=1 00:14:21.351 --rc genhtml_function_coverage=1 00:14:21.351 --rc genhtml_legend=1 00:14:21.351 --rc geninfo_all_blocks=1 00:14:21.351 --rc geninfo_unexecuted_blocks=1 00:14:21.351 00:14:21.351 ' 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:21.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.351 --rc genhtml_branch_coverage=1 00:14:21.351 --rc genhtml_function_coverage=1 00:14:21.351 --rc genhtml_legend=1 00:14:21.351 --rc geninfo_all_blocks=1 00:14:21.351 --rc geninfo_unexecuted_blocks=1 00:14:21.351 00:14:21.351 ' 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:21.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.351 --rc genhtml_branch_coverage=1 00:14:21.351 --rc genhtml_function_coverage=1 00:14:21.351 --rc genhtml_legend=1 00:14:21.351 --rc geninfo_all_blocks=1 00:14:21.351 --rc geninfo_unexecuted_blocks=1 00:14:21.351 00:14:21.351 ' 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@50 -- # : 0 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:21.351 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # prepare_net_devs 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # local -g is_hw=no 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # remove_target_ns 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:21.351 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@276 -- # nvmf_veth_init 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@233 -- # create_target_ns 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:14:21.352 13:19:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@234 -- # create_main_bridge 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@114 -- # delete_main_bridge 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@130 -- # return 0 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@28 -- # local -g _dev 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # ips=() 00:14:21.352 13:19:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@160 -- # set_up initiator0 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:14:21.352 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@160 -- # set_up target0 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set target0 up 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@161 -- # set_up target0_br 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # add_to_ns target0 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772161 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:14:21.611 10.0.0.1 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772162 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf 
'%u.%u.%u.%u\n' 10 0 0 2 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:14:21.611 10.0.0.2 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@75 -- # set_up initiator0 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@79 
-- # add_to_bridge target0_br 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@138 -- # set_up target0_br 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:21.611 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # ips=() 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth 
peer name initiator1_br 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@160 -- # set_up initiator1 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@160 -- # set_up target1 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set target1 up 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@161 -- # set_up target1_br 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # add_to_ns target1 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator1 
ip=167772163 in_ns= 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772163 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:14:21.612 10.0.0.3 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772164 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:14:21.612 10.0.0.4 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@75 -- # set_up initiator1 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 
-- # ip link set initiator1 up 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:14:21.612 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:14:21.881 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@138 -- # set_up target1_br 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@38 -- # ping_ips 2 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo initiator0 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=initiator0 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:21.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
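Stripped of the xtrace framing, the nvmftestinit sequence above builds the NET_TYPE=virt topology: a target network namespace, one veth pair per initiator/target pair, a bridge joining the peer ends, 10.0.0.x/24 addresses derived from an integer pool (167772161 is 10.0.0.1) via printf '%u.%u.%u.%u', and ACCEPT rules for the NVMe/TCP port. A condensed, hand-written recap of those commands for a single pair, not a verbatim excerpt of setup.sh (names and addresses follow the trace; run as root):

  ip netns add nvmf_ns_spdk
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link add initiator0 type veth peer name initiator0_br
  ip link add target0 type veth peer name target0_br
  ip link set target0 netns nvmf_ns_spdk                  # only the target side moves into the namespace
  ip addr add 10.0.0.1/24 dev initiator0 && ip link set initiator0 up
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
  ip netns exec nvmf_ns_spdk ip link set target0 up
  ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
  ip link set target0_br master nvmf_br && ip link set target0_br up
  iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1           # verify the initiator address from inside the namespace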
00:14:21.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:14:21.882 00:14:21.882 --- 10.0.0.1 ping statistics --- 00:14:21.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.882 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target0 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo target0 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=target0 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:14:21.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:21.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:14:21.882 00:14:21.882 --- 10.0.0.2 ping statistics --- 00:14:21.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.882 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=initiator1 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo initiator1 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=initiator1 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:14:21.882 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:21.882 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:14:21.882 00:14:21.882 --- 10.0.0.3 ping statistics --- 00:14:21.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.882 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target1 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo target1 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=target1 00:14:21.882 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:14:21.883 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:21.883 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:14:21.883 00:14:21.883 --- 10.0.0.4 ping statistics --- 00:14:21.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.883 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # return 0 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo initiator0 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=initiator0 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:14:21.883 13:19:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=initiator1 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo initiator1 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=initiator1 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target0 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo target0 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=target0 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:14:21.883 13:19:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target1 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo target1 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=target1 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:21.883 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
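At this point setup.sh has finished wiring up the test network: initiatorN and initiatorN_br sit in the default namespace, targetN lives inside nvmf_ns_spdk with its targetN_br counterpart on the nvmf_br bridge, the addresses are recorded in each device's ifalias, and TCP/4420 is opened in iptables. A condensed, standalone approximation of those steps (device, namespace and address names taken from the trace; the pairing of each endpoint with its *_br peer is inferred):

    # bring the second initiator/target pair up and enslave the bridge-facing ends
    ip link set initiator1 up
    ip netns exec nvmf_ns_spdk ip link set target1 up
    ip link set initiator1_br master nvmf_br && ip link set initiator1_br up
    ip link set target1_br master nvmf_br && ip link set target1_br up
    # accept NVMe/TCP traffic arriving on the initiator interface
    iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT
    # cross-namespace connectivity check, as ping_ips does above
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # initiator0, from inside the target netns
    ping -c 1 10.0.0.2                              # target0, from the default netns
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3   # initiator1
    ping -c 1 10.0.0.4                              # target1
    # the legacy env vars are simply read back from the interface aliases
    cat /sys/class/net/initiator0/ifalias                          # NVMF_FIRST_INITIATOR_IP=10.0.0.1
    ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias  # NVMF_FIRST_TARGET_IP=10.0.0.2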
00:14:21.884 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@324 -- # nvmfpid=67075 00:14:21.884 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:21.884 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # waitforlisten 67075 00:14:21.884 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67075 ']' 00:14:21.884 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.884 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:21.884 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.884 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:21.884 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67099 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # local digest len file key 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # local -A digests 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # digest=null 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # len=48 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # key=c4b7041afc715956dbca1a9edf5b265e8610334d40fdd143 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # mktemp -t spdk.key-null.XXX 00:14:22.450 13:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # file=/tmp/spdk.key-null.Ddw 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@527 -- # format_dhchap_key c4b7041afc715956dbca1a9edf5b265e8610334d40fdd143 0 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # format_key DHHC-1 c4b7041afc715956dbca1a9edf5b265e8610334d40fdd143 0 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # local prefix key digest 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # prefix=DHHC-1 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # key=c4b7041afc715956dbca1a9edf5b265e8610334d40fdd143 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # digest=0 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # python - 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # chmod 0600 /tmp/spdk.key-null.Ddw 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # echo /tmp/spdk.key-null.Ddw 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Ddw 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # local digest len file key 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # local -A digests 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # digest=sha512 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # len=64 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # key=3b3ff4aebc247066f002faad756198191dcec8d956f931d9948a67851fa0ec92 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # mktemp -t spdk.key-sha512.XXX 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # file=/tmp/spdk.key-sha512.s2o 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@527 -- # format_dhchap_key 3b3ff4aebc247066f002faad756198191dcec8d956f931d9948a67851fa0ec92 3 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # format_key DHHC-1 3b3ff4aebc247066f002faad756198191dcec8d956f931d9948a67851fa0ec92 3 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # local prefix key digest 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # prefix=DHHC-1 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # key=3b3ff4aebc247066f002faad756198191dcec8d956f931d9948a67851fa0ec92 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # digest=3 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # 
python - 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # chmod 0600 /tmp/spdk.key-sha512.s2o 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # echo /tmp/spdk.key-sha512.s2o 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.s2o 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # local digest len file key 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # local -A digests 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # digest=sha256 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # len=32 00:14:22.450 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # key=49cd398c759734847e767626ed5a7a5d 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # mktemp -t spdk.key-sha256.XXX 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # file=/tmp/spdk.key-sha256.oqh 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@527 -- # format_dhchap_key 49cd398c759734847e767626ed5a7a5d 1 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # format_key DHHC-1 49cd398c759734847e767626ed5a7a5d 1 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # local prefix key digest 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # prefix=DHHC-1 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # key=49cd398c759734847e767626ed5a7a5d 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # digest=1 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # python - 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # chmod 0600 /tmp/spdk.key-sha256.oqh 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # echo /tmp/spdk.key-sha256.oqh 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.oqh 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # local digest len file key 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # local -A digests 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # digest=sha384 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # len=48 00:14:22.451 
13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # key=4a4c9961031c1842f9122a7406af338f0040dc29015be8e6 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # mktemp -t spdk.key-sha384.XXX 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # file=/tmp/spdk.key-sha384.a2o 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@527 -- # format_dhchap_key 4a4c9961031c1842f9122a7406af338f0040dc29015be8e6 2 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # format_key DHHC-1 4a4c9961031c1842f9122a7406af338f0040dc29015be8e6 2 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # local prefix key digest 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # prefix=DHHC-1 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # key=4a4c9961031c1842f9122a7406af338f0040dc29015be8e6 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # digest=2 00:14:22.451 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # python - 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # chmod 0600 /tmp/spdk.key-sha384.a2o 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # echo /tmp/spdk.key-sha384.a2o 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.a2o 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # local digest len file key 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # local -A digests 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # digest=sha384 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # len=48 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # key=bcf32be9aac663721df72c6a213f0709ffd59e92355c9742 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # mktemp -t spdk.key-sha384.XXX 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # file=/tmp/spdk.key-sha384.e4d 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@527 -- # format_dhchap_key bcf32be9aac663721df72c6a213f0709ffd59e92355c9742 2 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # format_key DHHC-1 bcf32be9aac663721df72c6a213f0709ffd59e92355c9742 2 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # local prefix key digest 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # prefix=DHHC-1 
00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # key=bcf32be9aac663721df72c6a213f0709ffd59e92355c9742 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # digest=2 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # python - 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # chmod 0600 /tmp/spdk.key-sha384.e4d 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # echo /tmp/spdk.key-sha384.e4d 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.e4d 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # local digest len file key 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # local -A digests 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # digest=sha256 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # len=32 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # key=0fa6cc01d5a37c0850a451d5a2f4874d 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # mktemp -t spdk.key-sha256.XXX 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # file=/tmp/spdk.key-sha256.zvw 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@527 -- # format_dhchap_key 0fa6cc01d5a37c0850a451d5a2f4874d 1 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # format_key DHHC-1 0fa6cc01d5a37c0850a451d5a2f4874d 1 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # local prefix key digest 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # prefix=DHHC-1 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # key=0fa6cc01d5a37c0850a451d5a2f4874d 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # digest=1 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # python - 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # chmod 0600 /tmp/spdk.key-sha256.zvw 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # echo /tmp/spdk.key-sha256.zvw 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.zvw 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # local digest len file key 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # local -A digests 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # digest=sha512 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # len=64 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # key=b533a030bfd5bb35a4378ce35f5d7ba5ba9e16b90fef654cfdc71c720a489b4c 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # mktemp -t spdk.key-sha512.XXX 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # file=/tmp/spdk.key-sha512.08s 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@527 -- # format_dhchap_key b533a030bfd5bb35a4378ce35f5d7ba5ba9e16b90fef654cfdc71c720a489b4c 3 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # format_key DHHC-1 b533a030bfd5bb35a4378ce35f5d7ba5ba9e16b90fef654cfdc71c720a489b4c 3 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # local prefix key digest 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # prefix=DHHC-1 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # key=b533a030bfd5bb35a4378ce35f5d7ba5ba9e16b90fef654cfdc71c720a489b4c 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # digest=3 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # python - 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # chmod 0600 /tmp/spdk.key-sha512.08s 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # echo /tmp/spdk.key-sha512.08s 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.08s 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67075 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67075 ']' 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:22.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
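All seven key files are in place now (keys[0..3] plus ckeys[0..2]; ckeys[3] is intentionally left empty). Each gen_dhchap_key call above draws the requested number of random bytes with xxd and wraps the resulting hex string into a DHHC-1 secret via the inline python (whose body xtrace does not show); the result appears to follow the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<hash id>:<base64 of the key bytes plus a 4-byte check value>:, which can be confirmed against the key0 material generated above and the secret used on the nvme connect line further down:

    # hex string drawn for key0 above, and the secret it later becomes on the command line
    key=c4b7041afc715956dbca1a9edf5b265e8610334d40fdd143
    secret='DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==:'
    # the base64 field decodes to the 48 ASCII hex characters followed by 4 check bytes
    echo "$secret" | cut -d: -f3 | base64 -d | head -c 48; echo    # prints $key
    echo "$secret" | cut -d: -f3 | base64 -d | tail -c 4 | xxd -p  # trailing check value (presumably a CRC-32 of the key)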
00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:22.710 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.277 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:23.277 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:23.277 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67099 /var/tmp/host.sock 00:14:23.277 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67099 ']' 00:14:23.277 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:23.277 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:23.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:23.277 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:23.277 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:23.277 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.535 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:23.535 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:23.535 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:23.535 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.535 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.535 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.535 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:23.535 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ddw 00:14:23.535 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.535 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.535 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.535 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Ddw 00:14:23.535 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Ddw 00:14:23.794 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.s2o ]] 00:14:23.794 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.s2o 00:14:23.794 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
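With both applications up (the target on its default RPC socket /var/tmp/spdk.sock, the host app on /var/tmp/host.sock), the trace that continues below registers every key file on both sides through keyring_file_add_key; rpc_cmd goes to the target's socket while hostrpc wraps rpc.py -s /var/tmp/host.sock. Condensed, the registration amounts to:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc                       keyring_file_add_key key0  /tmp/spdk.key-null.Ddw     # target side
    $rpc -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.Ddw     # host side
    $rpc                       keyring_file_add_key ckey0 /tmp/spdk.key-sha512.s2o
    $rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.s2o
    # ...and likewise key1/ckey1, key2/ckey2 and key3 (ckey3 is empty, so it is skipped)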
00:14:23.794 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.794 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.794 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.s2o 00:14:23.794 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.s2o 00:14:24.054 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:24.054 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.oqh 00:14:24.054 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.054 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.054 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.054 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.oqh 00:14:24.054 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.oqh 00:14:24.312 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.a2o ]] 00:14:24.312 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.a2o 00:14:24.312 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.312 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.312 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.312 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.a2o 00:14:24.312 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.a2o 00:14:24.878 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:24.878 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.e4d 00:14:24.878 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.878 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.878 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.878 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.e4d 00:14:24.878 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.e4d 00:14:24.878 13:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.zvw ]] 00:14:24.878 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zvw 00:14:24.878 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.878 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.878 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.878 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zvw 00:14:24.878 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zvw 00:14:25.443 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:25.443 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.08s 00:14:25.443 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.443 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.443 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.444 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.08s 00:14:25.444 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.08s 00:14:25.444 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:25.444 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:25.444 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:25.444 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.444 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:25.444 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:26.009 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:26.009 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:26.009 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:26.009 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:26.009 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:26.009 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.009 13:19:27 
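connect_authenticate then exercises one digest/dhgroup/key combination per iteration: the host NQN is added to cnode0 with the key pair, the host app attaches a controller using the same keyring entries, the qpair's auth state is checked with jq, and the handshake is repeated from the kernel initiator with nvme connect and the inline DHHC-1 secrets before everything is torn down again. For the sha256/null/key0 round that follows, the essential commands (copied from the trace; rpc_cmd is the test helper for the target's RPC socket, secrets abbreviated here) are:

    host_rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock'
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723
    # target: permit this host on the subsystem, bound to key0/ckey0
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host app: attach a controller, authenticating with the same keys
    $host_rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # the qpair should report digest sha256, dhgroup null, state completed
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq '.[0].auth'
    $host_rpc bdev_nvme_detach_controller nvme0
    # same handshake via the kernel initiator, secrets passed inline
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q "$hostnqn" --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"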
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.009 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.009 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.009 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.009 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.009 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.009 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.266 00:14:26.266 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.266 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.266 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.524 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.524 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.524 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.524 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.524 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.524 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.524 { 00:14:26.524 "cntlid": 1, 00:14:26.524 "qid": 0, 00:14:26.524 "state": "enabled", 00:14:26.524 "thread": "nvmf_tgt_poll_group_000", 00:14:26.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:14:26.524 "listen_address": { 00:14:26.524 "trtype": "TCP", 00:14:26.524 "adrfam": "IPv4", 00:14:26.524 "traddr": "10.0.0.2", 00:14:26.524 "trsvcid": "4420" 00:14:26.524 }, 00:14:26.524 "peer_address": { 00:14:26.524 "trtype": "TCP", 00:14:26.524 "adrfam": "IPv4", 00:14:26.524 "traddr": "10.0.0.1", 00:14:26.524 "trsvcid": "53760" 00:14:26.524 }, 00:14:26.524 "auth": { 00:14:26.524 "state": "completed", 00:14:26.524 "digest": "sha256", 00:14:26.524 "dhgroup": "null" 00:14:26.524 } 00:14:26.524 } 00:14:26.524 ]' 00:14:26.524 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.782 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 
-- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.782 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.782 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:26.782 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.782 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.782 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.782 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.040 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:14:27.040 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:32.346 
13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.346 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.346 00:14:32.346 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.346 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.346 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.604 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.604 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.604 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.604 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.604 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.604 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.604 { 00:14:32.604 "cntlid": 3, 00:14:32.604 "qid": 0, 00:14:32.604 "state": "enabled", 00:14:32.604 "thread": "nvmf_tgt_poll_group_000", 00:14:32.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:14:32.604 "listen_address": { 00:14:32.604 "trtype": "TCP", 00:14:32.604 "adrfam": "IPv4", 00:14:32.604 "traddr": "10.0.0.2", 00:14:32.604 "trsvcid": "4420" 00:14:32.604 }, 00:14:32.604 "peer_address": { 00:14:32.604 "trtype": "TCP", 00:14:32.604 "adrfam": "IPv4", 00:14:32.604 "traddr": "10.0.0.1", 00:14:32.604 "trsvcid": "53786" 00:14:32.604 }, 00:14:32.604 "auth": { 00:14:32.604 "state": "completed", 00:14:32.604 "digest": "sha256", 00:14:32.604 "dhgroup": "null" 00:14:32.604 } 00:14:32.604 } 00:14:32.604 ]' 00:14:32.604 13:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.604 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.604 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.604 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:32.861 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.861 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.861 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.861 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.119 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:14:33.119 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:34.054 
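Each attach in the trace is followed by the same verification: the host must report a controller named nvme0, and the target must report a qpair whose negotiated auth parameters match what was configured, after which the controller is detached. A condensed sketch of that check, using the same jq filters the script traces above (paths and socket as in the previous sketch):

# Sketch of the verification step that follows each attach in the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0

# The controller created by bdev_nvme_attach_controller must show up on the host side.
[[ "$("$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

# The target's view of the qpair must show the negotiated digest/dhgroup and a completed auth state.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "sha256" ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "null" ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == "completed" ]]

# Tear the host-side controller down before the next key is tried.
"$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0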
13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.054 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.619 00:14:34.619 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.619 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.619 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.876 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.876 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.876 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.876 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.876 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.876 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.876 { 00:14:34.876 "cntlid": 5, 00:14:34.876 "qid": 0, 00:14:34.876 "state": "enabled", 00:14:34.876 "thread": "nvmf_tgt_poll_group_000", 00:14:34.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:14:34.876 "listen_address": { 00:14:34.876 "trtype": "TCP", 00:14:34.876 "adrfam": "IPv4", 00:14:34.876 "traddr": "10.0.0.2", 00:14:34.876 "trsvcid": "4420" 00:14:34.876 }, 00:14:34.876 "peer_address": { 00:14:34.876 "trtype": "TCP", 00:14:34.876 "adrfam": "IPv4", 00:14:34.876 "traddr": "10.0.0.1", 00:14:34.876 "trsvcid": "53818" 00:14:34.876 }, 00:14:34.876 "auth": { 00:14:34.876 "state": "completed", 00:14:34.876 "digest": "sha256", 00:14:34.876 "dhgroup": "null" 
00:14:34.876 } 00:14:34.876 } 00:14:34.876 ]' 00:14:34.876 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.876 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.876 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.876 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:34.876 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.876 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.876 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.876 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.439 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:14:35.439 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:14:36.032 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.032 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:14:36.032 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.032 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.032 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.032 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.032 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:36.032 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:36.290 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:36.291 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.291 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:36.291 13:19:38 
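Besides the SPDK bdev path, every key is also exercised through the kernel initiator: nvme-cli connects in-band, passing the DHHC-1 secrets literally rather than keyring names, then disconnects, and the host entry is removed from the subsystem before the next iteration. Roughly as below; the DHHC-1 strings are placeholders standing in for the generated secrets that appear in the log.

# Sketch of the in-band nvme-cli pass for one key.
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723
hostid=1dd592da-03b1-46ba-b90a-3aebb25e3723

# In-band DH-HCHAP with nvme-cli: secrets are passed as literal DHHC-1 strings.
# <host-secret> and <ctrl-secret> are placeholders for the generated keys shown in the log.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret 'DHHC-1:00:<host-secret>:' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl-secret>:'

# Drop the fabric connection and de-authorize the host so the next key starts clean.
nvme disconnect -n "$subnqn"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"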
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:36.291 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:36.291 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.291 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 00:14:36.291 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.291 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.291 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.291 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:36.291 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:36.291 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:36.548 00:14:36.548 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.548 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.548 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.806 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.806 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.806 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.806 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.063 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.063 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.063 { 00:14:37.063 "cntlid": 7, 00:14:37.063 "qid": 0, 00:14:37.063 "state": "enabled", 00:14:37.063 "thread": "nvmf_tgt_poll_group_000", 00:14:37.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:14:37.063 "listen_address": { 00:14:37.063 "trtype": "TCP", 00:14:37.063 "adrfam": "IPv4", 00:14:37.063 "traddr": "10.0.0.2", 00:14:37.063 "trsvcid": "4420" 00:14:37.063 }, 00:14:37.063 "peer_address": { 00:14:37.063 "trtype": "TCP", 00:14:37.063 "adrfam": "IPv4", 00:14:37.063 "traddr": "10.0.0.1", 00:14:37.063 "trsvcid": "46376" 00:14:37.063 }, 00:14:37.063 "auth": { 00:14:37.063 "state": "completed", 00:14:37.063 "digest": "sha256", 00:14:37.063 "dhgroup": "null" 
00:14:37.063 } 00:14:37.063 } 00:14:37.063 ]' 00:14:37.063 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.063 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.063 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.063 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:37.063 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.063 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.063 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.063 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.321 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:14:37.321 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:14:38.253 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.253 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:14:38.253 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.253 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.253 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.253 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:38.253 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.253 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:38.253 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:38.510 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:38.510 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.510 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:38.510 13:19:40 
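Here the trace moves on from the null DH group to ffdhe2048 (and later ffdhe3072), re-running the same configure/authenticate/verify cycle for every key. The overall shape of that sweep, reconstructed from the for-loops traced at target/auth.sh@119-123; the array contents below are an assumption limited to what appears in this part of the log, and connect_authenticate is the test's own helper whose body corresponds to the sketches above.

# Reconstruction of the sweep driving the trace. Only sha256 and the dhgroups seen in this
# chunk of the log are listed; the full script covers additional digests, groups, and keys.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
digest=sha256
dhgroups=(null ffdhe2048 ffdhe3072)
keys=(key0 key1 key2 key3)   # ckey0..ckey2 pair with them; key3 is used without a controller key in this run

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # Reconfigure the host driver for this digest/dhgroup, then run one
        # authenticate/verify/teardown cycle with key$keyid (and ckey$keyid if defined).
        "$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # helper name as traced in the log
    done
done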
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:38.510 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:38.510 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.510 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.510 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.510 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.510 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.510 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.510 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.510 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.768 00:14:38.768 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:38.768 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.768 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.025 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.025 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.025 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.025 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.025 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.025 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.025 { 00:14:39.025 "cntlid": 9, 00:14:39.025 "qid": 0, 00:14:39.025 "state": "enabled", 00:14:39.025 "thread": "nvmf_tgt_poll_group_000", 00:14:39.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:14:39.025 "listen_address": { 00:14:39.025 "trtype": "TCP", 00:14:39.025 "adrfam": "IPv4", 00:14:39.025 "traddr": "10.0.0.2", 00:14:39.025 "trsvcid": "4420" 00:14:39.025 }, 00:14:39.025 "peer_address": { 00:14:39.025 "trtype": "TCP", 00:14:39.025 "adrfam": "IPv4", 00:14:39.025 "traddr": "10.0.0.1", 00:14:39.025 "trsvcid": "46408" 00:14:39.025 }, 00:14:39.025 "auth": 
{ 00:14:39.025 "state": "completed", 00:14:39.025 "digest": "sha256", 00:14:39.025 "dhgroup": "ffdhe2048" 00:14:39.025 } 00:14:39.025 } 00:14:39.025 ]' 00:14:39.025 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.025 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.025 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.282 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:39.282 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.282 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.282 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.282 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.539 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:14:39.539 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:14:40.104 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.104 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:14:40.104 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.104 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.104 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.104 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.104 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:40.104 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:40.670 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:40.670 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.670 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:40.670 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:40.670 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:40.670 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.670 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.670 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.670 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.670 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.670 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.670 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.670 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.926 00:14:40.926 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.926 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.926 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.182 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.182 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.182 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.182 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.182 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.182 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:41.182 { 00:14:41.182 "cntlid": 11, 00:14:41.182 "qid": 0, 00:14:41.182 "state": "enabled", 00:14:41.182 "thread": "nvmf_tgt_poll_group_000", 00:14:41.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:14:41.182 "listen_address": { 00:14:41.182 "trtype": "TCP", 00:14:41.182 "adrfam": "IPv4", 00:14:41.182 "traddr": "10.0.0.2", 00:14:41.182 "trsvcid": "4420" 00:14:41.182 }, 
00:14:41.182 "peer_address": { 00:14:41.182 "trtype": "TCP", 00:14:41.182 "adrfam": "IPv4", 00:14:41.182 "traddr": "10.0.0.1", 00:14:41.182 "trsvcid": "46436" 00:14:41.182 }, 00:14:41.182 "auth": { 00:14:41.182 "state": "completed", 00:14:41.182 "digest": "sha256", 00:14:41.182 "dhgroup": "ffdhe2048" 00:14:41.182 } 00:14:41.182 } 00:14:41.182 ]' 00:14:41.182 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.182 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.182 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.182 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:41.182 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:41.439 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.439 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.439 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.697 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:14:41.697 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:14:42.264 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.264 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:14:42.264 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.264 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.264 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.264 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.264 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:42.264 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:42.524 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha256 ffdhe2048 2 00:14:42.524 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:42.524 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:42.524 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:42.524 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:42.524 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.524 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.524 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.524 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.524 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.524 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.524 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.524 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.089 00:14:43.089 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.089 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.089 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.347 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.347 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.347 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.347 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.347 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.347 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.347 { 00:14:43.347 "cntlid": 13, 00:14:43.347 "qid": 0, 00:14:43.347 "state": "enabled", 00:14:43.347 "thread": "nvmf_tgt_poll_group_000", 00:14:43.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:14:43.347 "listen_address": { 00:14:43.347 "trtype": "TCP", 
00:14:43.347 "adrfam": "IPv4", 00:14:43.347 "traddr": "10.0.0.2", 00:14:43.347 "trsvcid": "4420" 00:14:43.347 }, 00:14:43.347 "peer_address": { 00:14:43.347 "trtype": "TCP", 00:14:43.347 "adrfam": "IPv4", 00:14:43.347 "traddr": "10.0.0.1", 00:14:43.347 "trsvcid": "46464" 00:14:43.347 }, 00:14:43.347 "auth": { 00:14:43.347 "state": "completed", 00:14:43.347 "digest": "sha256", 00:14:43.347 "dhgroup": "ffdhe2048" 00:14:43.347 } 00:14:43.347 } 00:14:43.347 ]' 00:14:43.347 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.347 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.347 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:43.347 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:43.347 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:43.347 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.347 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.347 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.605 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:14:43.605 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:14:44.537 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.537 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:14:44.537 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.537 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.537 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.537 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:44.537 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:44.537 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:44.796 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:44.796 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:44.796 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:44.796 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:44.796 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:44.796 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.796 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 00:14:44.796 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.796 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.796 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.796 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:44.796 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:44.796 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:45.054 00:14:45.054 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.054 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.054 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.620 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.621 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.621 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.621 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.621 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.621 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:45.621 { 00:14:45.621 "cntlid": 15, 00:14:45.621 "qid": 0, 00:14:45.621 "state": "enabled", 00:14:45.621 "thread": "nvmf_tgt_poll_group_000", 00:14:45.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:14:45.621 "listen_address": { 00:14:45.621 
"trtype": "TCP", 00:14:45.621 "adrfam": "IPv4", 00:14:45.621 "traddr": "10.0.0.2", 00:14:45.621 "trsvcid": "4420" 00:14:45.621 }, 00:14:45.621 "peer_address": { 00:14:45.621 "trtype": "TCP", 00:14:45.621 "adrfam": "IPv4", 00:14:45.621 "traddr": "10.0.0.1", 00:14:45.621 "trsvcid": "46498" 00:14:45.621 }, 00:14:45.621 "auth": { 00:14:45.621 "state": "completed", 00:14:45.621 "digest": "sha256", 00:14:45.621 "dhgroup": "ffdhe2048" 00:14:45.621 } 00:14:45.621 } 00:14:45.621 ]' 00:14:45.621 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:45.621 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.621 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:45.621 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:45.621 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:45.621 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.621 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.621 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.879 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:14:45.879 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.810 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.375 00:14:47.375 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.375 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.376 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:47.634 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.634 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.634 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.634 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.634 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.634 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.634 { 00:14:47.634 "cntlid": 17, 00:14:47.634 "qid": 0, 00:14:47.634 "state": "enabled", 00:14:47.634 "thread": "nvmf_tgt_poll_group_000", 00:14:47.634 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:14:47.634 "listen_address": { 00:14:47.634 "trtype": "TCP", 00:14:47.634 "adrfam": "IPv4", 00:14:47.634 "traddr": "10.0.0.2", 00:14:47.634 "trsvcid": "4420" 00:14:47.634 }, 00:14:47.634 "peer_address": { 00:14:47.634 "trtype": "TCP", 00:14:47.634 "adrfam": "IPv4", 00:14:47.634 "traddr": "10.0.0.1", 00:14:47.634 "trsvcid": "55798" 00:14:47.634 }, 00:14:47.634 "auth": { 00:14:47.634 "state": "completed", 00:14:47.634 "digest": "sha256", 00:14:47.634 "dhgroup": "ffdhe3072" 00:14:47.634 } 00:14:47.634 } 00:14:47.634 ]' 00:14:47.634 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:47.634 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.634 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.634 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:47.634 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.634 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.634 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.634 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.892 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:14:47.892 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:14:48.826 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.826 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:14:48.826 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.826 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.826 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.826 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.826 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe3072 00:14:48.826 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:49.085 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:49.085 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.085 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:49.085 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:49.085 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:49.085 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.085 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.085 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.085 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.085 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.085 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.085 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.085 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.343 00:14:49.343 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.343 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.343 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.909 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.909 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.909 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.909 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.909 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.909 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@74 -- # qpairs='[ 00:14:49.909 { 00:14:49.909 "cntlid": 19, 00:14:49.909 "qid": 0, 00:14:49.909 "state": "enabled", 00:14:49.909 "thread": "nvmf_tgt_poll_group_000", 00:14:49.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:14:49.909 "listen_address": { 00:14:49.909 "trtype": "TCP", 00:14:49.909 "adrfam": "IPv4", 00:14:49.909 "traddr": "10.0.0.2", 00:14:49.909 "trsvcid": "4420" 00:14:49.909 }, 00:14:49.909 "peer_address": { 00:14:49.909 "trtype": "TCP", 00:14:49.909 "adrfam": "IPv4", 00:14:49.909 "traddr": "10.0.0.1", 00:14:49.909 "trsvcid": "55838" 00:14:49.909 }, 00:14:49.909 "auth": { 00:14:49.909 "state": "completed", 00:14:49.909 "digest": "sha256", 00:14:49.909 "dhgroup": "ffdhe3072" 00:14:49.909 } 00:14:49.909 } 00:14:49.909 ]' 00:14:49.909 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.909 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.909 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.909 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:49.909 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.909 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.909 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.909 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.167 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:14:50.168 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:14:51.102 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.102 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:14:51.102 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.102 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.102 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.102 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.102 13:19:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:51.102 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:51.361 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:51.361 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.361 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:51.361 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:51.361 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:51.361 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.361 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.361 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.361 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.361 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.361 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.361 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.361 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.619 00:14:51.619 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.619 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.619 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.878 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.878 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.878 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.878 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.878 13:19:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.878 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.878 { 00:14:51.878 "cntlid": 21, 00:14:51.878 "qid": 0, 00:14:51.878 "state": "enabled", 00:14:51.878 "thread": "nvmf_tgt_poll_group_000", 00:14:51.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:14:51.878 "listen_address": { 00:14:51.878 "trtype": "TCP", 00:14:51.878 "adrfam": "IPv4", 00:14:51.878 "traddr": "10.0.0.2", 00:14:51.878 "trsvcid": "4420" 00:14:51.878 }, 00:14:51.878 "peer_address": { 00:14:51.878 "trtype": "TCP", 00:14:51.878 "adrfam": "IPv4", 00:14:51.878 "traddr": "10.0.0.1", 00:14:51.878 "trsvcid": "55878" 00:14:51.878 }, 00:14:51.878 "auth": { 00:14:51.878 "state": "completed", 00:14:51.878 "digest": "sha256", 00:14:51.878 "dhgroup": "ffdhe3072" 00:14:51.878 } 00:14:51.878 } 00:14:51.878 ]' 00:14:51.878 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.135 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.135 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.135 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:52.135 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.135 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.135 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.135 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.393 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:14:52.393 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:14:53.332 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.332 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:14:53.332 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.332 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.332 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
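[editor's note] The trace above closes one sha256/ffdhe3072 round for key2: bdev_nvme_set_options pins the digest and DH group on the host side, nvmf_subsystem_add_host registers the host NQN with --dhchap-key/--dhchap-ctrlr-key, bdev_nvme_attach_controller brings up an authenticated qpair, and nvmf_subsystem_get_qpairs plus jq confirm the negotiated digest, dhgroup, and auth state before teardown. A minimal sketch of that check follows, reusing only RPCs and jq filters visible in the trace; key2/ckey2 are keyring entries created earlier in the test, and rpc_cmd in the suite talks to the target's RPC socket (shown here against the default).

# Paths and socket as used by the test; adjust for your environment.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723

# Host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# Target side: allow the host with the key pair under test, then attach from the host.
"$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Verify the qpair actually authenticated with the expected parameters.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]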
00:14:53.332 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.332 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:53.332 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:53.590 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:53.591 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.591 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:53.591 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:53.591 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:53.591 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.591 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 00:14:53.591 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.591 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.591 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.591 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:53.591 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:53.591 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:53.848 00:14:53.848 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.848 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.848 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.107 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.107 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.107 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.107 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.107 
13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.107 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.107 { 00:14:54.107 "cntlid": 23, 00:14:54.107 "qid": 0, 00:14:54.107 "state": "enabled", 00:14:54.107 "thread": "nvmf_tgt_poll_group_000", 00:14:54.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:14:54.107 "listen_address": { 00:14:54.107 "trtype": "TCP", 00:14:54.107 "adrfam": "IPv4", 00:14:54.107 "traddr": "10.0.0.2", 00:14:54.107 "trsvcid": "4420" 00:14:54.107 }, 00:14:54.107 "peer_address": { 00:14:54.107 "trtype": "TCP", 00:14:54.107 "adrfam": "IPv4", 00:14:54.107 "traddr": "10.0.0.1", 00:14:54.107 "trsvcid": "55908" 00:14:54.107 }, 00:14:54.107 "auth": { 00:14:54.107 "state": "completed", 00:14:54.107 "digest": "sha256", 00:14:54.107 "dhgroup": "ffdhe3072" 00:14:54.107 } 00:14:54.107 } 00:14:54.107 ]' 00:14:54.107 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.107 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.107 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.365 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:54.365 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.365 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.365 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.365 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.622 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:14:54.622 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:14:55.193 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.193 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:14:55.193 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.193 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.193 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.193 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:14:55.193 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.193 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:55.193 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:55.760 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:55.760 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.760 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:55.760 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:55.760 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:55.760 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.760 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.760 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.760 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.760 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.760 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.760 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.760 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.018 00:14:56.018 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.018 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.018 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.276 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.276 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.276 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:56.276 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.276 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.276 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.276 { 00:14:56.276 "cntlid": 25, 00:14:56.276 "qid": 0, 00:14:56.276 "state": "enabled", 00:14:56.276 "thread": "nvmf_tgt_poll_group_000", 00:14:56.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:14:56.276 "listen_address": { 00:14:56.276 "trtype": "TCP", 00:14:56.276 "adrfam": "IPv4", 00:14:56.276 "traddr": "10.0.0.2", 00:14:56.276 "trsvcid": "4420" 00:14:56.276 }, 00:14:56.276 "peer_address": { 00:14:56.276 "trtype": "TCP", 00:14:56.276 "adrfam": "IPv4", 00:14:56.276 "traddr": "10.0.0.1", 00:14:56.276 "trsvcid": "55940" 00:14:56.276 }, 00:14:56.276 "auth": { 00:14:56.276 "state": "completed", 00:14:56.276 "digest": "sha256", 00:14:56.276 "dhgroup": "ffdhe4096" 00:14:56.276 } 00:14:56.276 } 00:14:56.276 ]' 00:14:56.276 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.276 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:56.276 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.534 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:56.535 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.535 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.535 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.535 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.793 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:14:56.793 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:14:57.360 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.360 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:14:57.360 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
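[editor's note] Each round ends the same way on the kernel-initiator side, as above for key0 with ffdhe4096: nvme-cli connects with the DHHC-1 secret pair, the disconnect output confirms exactly one controller existed, and the host entry is removed so the next key starts clean. A condensed sketch of that round-trip is below, with the secrets replaced by placeholders; the DHHC-1 strings in the trace are generated test fixtures, and rpc.py is shown against its default target socket.

subnqn=nqn.2024-03.io.spdk:cnode0
hostuuid=1dd592da-03b1-46ba-b90a-3aebb25e3723
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostuuid

# Connect with in-band DH-HMAC-CHAP authentication; <host-secret>/<ctrl-secret> stand in for
# the generated DHHC-1 values that appear in the trace.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostuuid" -l 0 \
    --dhchap-secret "DHHC-1:00:<host-secret>:" --dhchap-ctrl-secret "DHHC-1:03:<ctrl-secret>:"

# Tear the session down again.
nvme disconnect -n "$subnqn"

# Deregister the host NQN before the next digest/dhgroup/key combination.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"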
00:14:57.360 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.360 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.360 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.360 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:57.360 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:57.618 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:57.618 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.618 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:57.618 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:57.618 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:57.618 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.618 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.618 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.618 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.876 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.876 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.876 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.876 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.135 00:14:58.135 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.135 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.135 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.392 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.392 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.392 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.392 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.392 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.392 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.392 { 00:14:58.392 "cntlid": 27, 00:14:58.392 "qid": 0, 00:14:58.392 "state": "enabled", 00:14:58.392 "thread": "nvmf_tgt_poll_group_000", 00:14:58.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:14:58.392 "listen_address": { 00:14:58.392 "trtype": "TCP", 00:14:58.392 "adrfam": "IPv4", 00:14:58.392 "traddr": "10.0.0.2", 00:14:58.392 "trsvcid": "4420" 00:14:58.392 }, 00:14:58.392 "peer_address": { 00:14:58.392 "trtype": "TCP", 00:14:58.392 "adrfam": "IPv4", 00:14:58.392 "traddr": "10.0.0.1", 00:14:58.392 "trsvcid": "40470" 00:14:58.392 }, 00:14:58.392 "auth": { 00:14:58.392 "state": "completed", 00:14:58.392 "digest": "sha256", 00:14:58.392 "dhgroup": "ffdhe4096" 00:14:58.392 } 00:14:58.392 } 00:14:58.392 ]' 00:14:58.392 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.392 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.392 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.651 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:58.651 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.651 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.651 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.651 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.910 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:14:58.910 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:14:59.478 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.478 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:14:59.478 
13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.478 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.478 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.478 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.478 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:59.478 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:00.044 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:00.044 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.044 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:00.044 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:00.044 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:00.044 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.044 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.044 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.044 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.044 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.044 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.044 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.044 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.303 00:15:00.303 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.303 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.303 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.561 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.561 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.561 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.561 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.562 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.562 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.562 { 00:15:00.562 "cntlid": 29, 00:15:00.562 "qid": 0, 00:15:00.562 "state": "enabled", 00:15:00.562 "thread": "nvmf_tgt_poll_group_000", 00:15:00.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:00.562 "listen_address": { 00:15:00.562 "trtype": "TCP", 00:15:00.562 "adrfam": "IPv4", 00:15:00.562 "traddr": "10.0.0.2", 00:15:00.562 "trsvcid": "4420" 00:15:00.562 }, 00:15:00.562 "peer_address": { 00:15:00.562 "trtype": "TCP", 00:15:00.562 "adrfam": "IPv4", 00:15:00.562 "traddr": "10.0.0.1", 00:15:00.562 "trsvcid": "40490" 00:15:00.562 }, 00:15:00.562 "auth": { 00:15:00.562 "state": "completed", 00:15:00.562 "digest": "sha256", 00:15:00.562 "dhgroup": "ffdhe4096" 00:15:00.562 } 00:15:00.562 } 00:15:00.562 ]' 00:15:00.562 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.562 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.562 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.820 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:00.820 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.820 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.820 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.820 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.078 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:15:01.078 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:15:01.649 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.649 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:01.649 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.649 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.649 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.649 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.649 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:01.649 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:01.908 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:01.908 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.908 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:01.908 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:01.908 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:01.908 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.908 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 00:15:01.908 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.908 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.908 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.908 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:01.908 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:01.908 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.474 00:15:02.474 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.474 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.474 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.734 13:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.734 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.734 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.734 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.734 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.734 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.734 { 00:15:02.734 "cntlid": 31, 00:15:02.734 "qid": 0, 00:15:02.734 "state": "enabled", 00:15:02.734 "thread": "nvmf_tgt_poll_group_000", 00:15:02.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:02.734 "listen_address": { 00:15:02.734 "trtype": "TCP", 00:15:02.734 "adrfam": "IPv4", 00:15:02.734 "traddr": "10.0.0.2", 00:15:02.734 "trsvcid": "4420" 00:15:02.734 }, 00:15:02.734 "peer_address": { 00:15:02.734 "trtype": "TCP", 00:15:02.734 "adrfam": "IPv4", 00:15:02.734 "traddr": "10.0.0.1", 00:15:02.734 "trsvcid": "40530" 00:15:02.734 }, 00:15:02.734 "auth": { 00:15:02.734 "state": "completed", 00:15:02.734 "digest": "sha256", 00:15:02.734 "dhgroup": "ffdhe4096" 00:15:02.734 } 00:15:02.734 } 00:15:02.734 ]' 00:15:02.734 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.734 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.734 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.734 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:02.734 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.734 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.734 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.734 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.299 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:15:03.299 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:15:03.865 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.865 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:03.865 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.865 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.865 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.865 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:03.865 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.865 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:03.865 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:04.122 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:04.122 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.122 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:04.122 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:04.122 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:04.122 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.122 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.122 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.122 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.122 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.122 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.122 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.122 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.686 00:15:04.686 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.686 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.686 13:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.944 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.944 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.944 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.944 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.944 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.944 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.944 { 00:15:04.944 "cntlid": 33, 00:15:04.944 "qid": 0, 00:15:04.944 "state": "enabled", 00:15:04.944 "thread": "nvmf_tgt_poll_group_000", 00:15:04.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:04.944 "listen_address": { 00:15:04.944 "trtype": "TCP", 00:15:04.944 "adrfam": "IPv4", 00:15:04.944 "traddr": "10.0.0.2", 00:15:04.944 "trsvcid": "4420" 00:15:04.944 }, 00:15:04.944 "peer_address": { 00:15:04.944 "trtype": "TCP", 00:15:04.944 "adrfam": "IPv4", 00:15:04.944 "traddr": "10.0.0.1", 00:15:04.944 "trsvcid": "40560" 00:15:04.944 }, 00:15:04.944 "auth": { 00:15:04.944 "state": "completed", 00:15:04.944 "digest": "sha256", 00:15:04.944 "dhgroup": "ffdhe6144" 00:15:04.944 } 00:15:04.944 } 00:15:04.944 ]' 00:15:04.944 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.944 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.944 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.202 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:05.202 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.202 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.202 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.202 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.460 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:15:05.460 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 
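[editor's note] By this point the sweep has rolled over from ffdhe4096 to ffdhe6144 and is repeating the same key walk; the for-loop markers at target/auth.sh@119-121 in the xtrace show the nesting. The schematic below reconstructs that iteration for the sha256 pass covered by this part of the log only; loop variable names follow the trace, connect_authenticate and hostrpc are the script's own helpers, and the full script also sweeps other digests.

# Schematic reconstruction of the auth.sh@119-121 loop nesting seen in the xtrace output.
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)      # groups exercised so far in this log
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do            # keys[0..3] and ckeys[*] were registered earlier in the test
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha256 "$dhgroup" "$keyid"
    done
done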
00:15:06.070 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.070 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:06.070 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.070 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.070 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.070 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.070 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:06.071 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:06.329 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:06.329 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.329 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:06.329 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:06.329 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:06.329 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.329 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.329 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.329 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.329 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.329 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.329 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.329 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.897 00:15:06.897 13:20:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.897 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.897 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.155 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.155 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.155 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.155 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.155 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.155 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.155 { 00:15:07.155 "cntlid": 35, 00:15:07.155 "qid": 0, 00:15:07.155 "state": "enabled", 00:15:07.155 "thread": "nvmf_tgt_poll_group_000", 00:15:07.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:07.155 "listen_address": { 00:15:07.155 "trtype": "TCP", 00:15:07.155 "adrfam": "IPv4", 00:15:07.155 "traddr": "10.0.0.2", 00:15:07.155 "trsvcid": "4420" 00:15:07.155 }, 00:15:07.155 "peer_address": { 00:15:07.155 "trtype": "TCP", 00:15:07.155 "adrfam": "IPv4", 00:15:07.155 "traddr": "10.0.0.1", 00:15:07.155 "trsvcid": "43198" 00:15:07.155 }, 00:15:07.155 "auth": { 00:15:07.155 "state": "completed", 00:15:07.155 "digest": "sha256", 00:15:07.155 "dhgroup": "ffdhe6144" 00:15:07.155 } 00:15:07.155 } 00:15:07.155 ]' 00:15:07.155 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.412 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.412 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.412 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:07.412 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.412 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.412 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.412 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.670 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:15:07.670 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret 
DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:15:08.644 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.644 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:08.644 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.644 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.644 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.644 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.644 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:08.644 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:08.925 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:08.925 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.925 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:08.925 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:08.925 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:08.925 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.925 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.925 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.925 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.925 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.925 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.925 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.925 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.183 00:15:09.442 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.442 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.442 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.700 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.700 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.700 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.700 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.700 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.700 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.700 { 00:15:09.700 "cntlid": 37, 00:15:09.700 "qid": 0, 00:15:09.700 "state": "enabled", 00:15:09.700 "thread": "nvmf_tgt_poll_group_000", 00:15:09.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:09.700 "listen_address": { 00:15:09.700 "trtype": "TCP", 00:15:09.700 "adrfam": "IPv4", 00:15:09.700 "traddr": "10.0.0.2", 00:15:09.700 "trsvcid": "4420" 00:15:09.700 }, 00:15:09.700 "peer_address": { 00:15:09.700 "trtype": "TCP", 00:15:09.700 "adrfam": "IPv4", 00:15:09.700 "traddr": "10.0.0.1", 00:15:09.700 "trsvcid": "43220" 00:15:09.700 }, 00:15:09.700 "auth": { 00:15:09.700 "state": "completed", 00:15:09.700 "digest": "sha256", 00:15:09.700 "dhgroup": "ffdhe6144" 00:15:09.700 } 00:15:09.700 } 00:15:09.700 ]' 00:15:09.700 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.700 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.700 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.700 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:09.700 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.700 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.700 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.700 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.265 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:15:10.265 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:15:10.832 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.832 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:10.832 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.832 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.832 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.832 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.832 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:10.832 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:11.397 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:11.397 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.397 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:11.397 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:11.398 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:11.398 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.398 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 00:15:11.398 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.398 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.398 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.398 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:11.398 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.398 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.964 00:15:11.964 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.964 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.964 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.222 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.222 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.222 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.222 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.222 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.222 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.222 { 00:15:12.222 "cntlid": 39, 00:15:12.222 "qid": 0, 00:15:12.222 "state": "enabled", 00:15:12.222 "thread": "nvmf_tgt_poll_group_000", 00:15:12.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:12.222 "listen_address": { 00:15:12.222 "trtype": "TCP", 00:15:12.222 "adrfam": "IPv4", 00:15:12.223 "traddr": "10.0.0.2", 00:15:12.223 "trsvcid": "4420" 00:15:12.223 }, 00:15:12.223 "peer_address": { 00:15:12.223 "trtype": "TCP", 00:15:12.223 "adrfam": "IPv4", 00:15:12.223 "traddr": "10.0.0.1", 00:15:12.223 "trsvcid": "43240" 00:15:12.223 }, 00:15:12.223 "auth": { 00:15:12.223 "state": "completed", 00:15:12.223 "digest": "sha256", 00:15:12.223 "dhgroup": "ffdhe6144" 00:15:12.223 } 00:15:12.223 } 00:15:12.223 ]' 00:15:12.223 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.223 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:12.223 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.223 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:12.223 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.223 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.223 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.223 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.789 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:15:12.789 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:15:13.355 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.355 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:13.355 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.355 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.355 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.355 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:13.355 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.355 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:13.355 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:13.614 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:13.614 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.614 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:13.614 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:13.614 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:13.614 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.615 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.615 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.615 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.615 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.615 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.615 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.615 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.554 00:15:14.554 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.554 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.554 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.814 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.814 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.814 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.814 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.814 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.814 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.814 { 00:15:14.814 "cntlid": 41, 00:15:14.814 "qid": 0, 00:15:14.814 "state": "enabled", 00:15:14.814 "thread": "nvmf_tgt_poll_group_000", 00:15:14.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:14.814 "listen_address": { 00:15:14.814 "trtype": "TCP", 00:15:14.814 "adrfam": "IPv4", 00:15:14.814 "traddr": "10.0.0.2", 00:15:14.814 "trsvcid": "4420" 00:15:14.814 }, 00:15:14.814 "peer_address": { 00:15:14.814 "trtype": "TCP", 00:15:14.814 "adrfam": "IPv4", 00:15:14.814 "traddr": "10.0.0.1", 00:15:14.814 "trsvcid": "43272" 00:15:14.814 }, 00:15:14.814 "auth": { 00:15:14.814 "state": "completed", 00:15:14.814 "digest": "sha256", 00:15:14.814 "dhgroup": "ffdhe8192" 00:15:14.814 } 00:15:14.814 } 00:15:14.814 ]' 00:15:14.814 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.814 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.814 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.814 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:14.814 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.814 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.814 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.814 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.380 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret 
DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:15:15.380 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:15:15.947 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.947 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:15.947 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.947 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.947 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.947 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.947 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:15.947 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:16.205 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:16.205 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.205 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:16.205 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:16.205 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:16.205 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.206 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.206 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.206 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.206 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.206 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.206 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.206 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.140 00:15:17.140 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.140 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.140 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.398 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.398 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.398 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.398 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.398 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.398 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.398 { 00:15:17.398 "cntlid": 43, 00:15:17.398 "qid": 0, 00:15:17.398 "state": "enabled", 00:15:17.398 "thread": "nvmf_tgt_poll_group_000", 00:15:17.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:17.398 "listen_address": { 00:15:17.398 "trtype": "TCP", 00:15:17.398 "adrfam": "IPv4", 00:15:17.398 "traddr": "10.0.0.2", 00:15:17.398 "trsvcid": "4420" 00:15:17.398 }, 00:15:17.398 "peer_address": { 00:15:17.398 "trtype": "TCP", 00:15:17.398 "adrfam": "IPv4", 00:15:17.398 "traddr": "10.0.0.1", 00:15:17.398 "trsvcid": "36512" 00:15:17.398 }, 00:15:17.398 "auth": { 00:15:17.398 "state": "completed", 00:15:17.398 "digest": "sha256", 00:15:17.398 "dhgroup": "ffdhe8192" 00:15:17.398 } 00:15:17.398 } 00:15:17.398 ]' 00:15:17.398 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.398 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.398 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.398 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:17.398 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.655 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.655 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.655 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.913 13:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:15:17.913 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:15:18.480 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.480 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:18.480 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.480 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.480 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.480 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.480 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:18.480 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:19.047 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:19.047 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.047 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.047 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:19.047 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:19.047 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.047 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.047 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.047 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.047 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.047 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.047 13:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.047 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.614 00:15:19.614 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.614 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.614 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.873 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.873 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.873 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.873 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.873 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.873 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.873 { 00:15:19.873 "cntlid": 45, 00:15:19.873 "qid": 0, 00:15:19.873 "state": "enabled", 00:15:19.873 "thread": "nvmf_tgt_poll_group_000", 00:15:19.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:19.873 "listen_address": { 00:15:19.873 "trtype": "TCP", 00:15:19.873 "adrfam": "IPv4", 00:15:19.873 "traddr": "10.0.0.2", 00:15:19.873 "trsvcid": "4420" 00:15:19.873 }, 00:15:19.873 "peer_address": { 00:15:19.873 "trtype": "TCP", 00:15:19.873 "adrfam": "IPv4", 00:15:19.873 "traddr": "10.0.0.1", 00:15:19.873 "trsvcid": "36534" 00:15:19.873 }, 00:15:19.873 "auth": { 00:15:19.873 "state": "completed", 00:15:19.873 "digest": "sha256", 00:15:19.873 "dhgroup": "ffdhe8192" 00:15:19.873 } 00:15:19.873 } 00:15:19.873 ]' 00:15:19.873 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.873 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.873 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.131 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:20.131 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.131 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.131 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.131 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.389 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:15:20.389 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:15:21.325 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.325 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:21.325 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.325 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.325 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.325 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.325 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:21.325 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:21.583 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:21.583 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.583 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.583 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:21.583 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:21.583 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.584 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 00:15:21.584 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.584 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.584 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.584 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:15:21.584 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:21.584 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.150 00:15:22.150 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.150 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.150 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.408 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.408 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.408 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.408 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.408 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.408 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.409 { 00:15:22.409 "cntlid": 47, 00:15:22.409 "qid": 0, 00:15:22.409 "state": "enabled", 00:15:22.409 "thread": "nvmf_tgt_poll_group_000", 00:15:22.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:22.409 "listen_address": { 00:15:22.409 "trtype": "TCP", 00:15:22.409 "adrfam": "IPv4", 00:15:22.409 "traddr": "10.0.0.2", 00:15:22.409 "trsvcid": "4420" 00:15:22.409 }, 00:15:22.409 "peer_address": { 00:15:22.409 "trtype": "TCP", 00:15:22.409 "adrfam": "IPv4", 00:15:22.409 "traddr": "10.0.0.1", 00:15:22.409 "trsvcid": "36566" 00:15:22.409 }, 00:15:22.409 "auth": { 00:15:22.409 "state": "completed", 00:15:22.409 "digest": "sha256", 00:15:22.409 "dhgroup": "ffdhe8192" 00:15:22.409 } 00:15:22.409 } 00:15:22.409 ]' 00:15:22.409 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.667 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.667 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.667 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:22.667 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.667 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.667 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.667 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.925 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:15:22.925 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:15:23.860 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.860 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:23.860 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.860 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.860 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.860 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:23.860 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:23.860 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.860 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:23.860 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:24.119 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:24.119 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.119 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:24.119 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:24.119 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:24.119 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.119 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.119 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.119 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.119 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.119 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.119 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.119 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.378 00:15:24.378 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.378 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.378 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.639 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.639 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.639 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.639 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.639 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.639 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.639 { 00:15:24.639 "cntlid": 49, 00:15:24.639 "qid": 0, 00:15:24.639 "state": "enabled", 00:15:24.639 "thread": "nvmf_tgt_poll_group_000", 00:15:24.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:24.639 "listen_address": { 00:15:24.639 "trtype": "TCP", 00:15:24.639 "adrfam": "IPv4", 00:15:24.639 "traddr": "10.0.0.2", 00:15:24.639 "trsvcid": "4420" 00:15:24.639 }, 00:15:24.639 "peer_address": { 00:15:24.639 "trtype": "TCP", 00:15:24.639 "adrfam": "IPv4", 00:15:24.639 "traddr": "10.0.0.1", 00:15:24.639 "trsvcid": "36584" 00:15:24.639 }, 00:15:24.639 "auth": { 00:15:24.639 "state": "completed", 00:15:24.639 "digest": "sha384", 00:15:24.639 "dhgroup": "null" 00:15:24.639 } 00:15:24.639 } 00:15:24.639 ]' 00:15:24.639 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.897 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.897 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.897 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:24.898 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.898 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.898 13:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.898 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.168 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:15:25.168 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:15:26.109 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.109 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:26.109 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.109 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.109 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.109 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.109 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:26.110 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:26.371 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:26.371 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.371 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:26.371 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:26.371 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:26.371 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.371 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.371 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.371 13:20:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.371 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.371 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.371 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.371 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.631 00:15:26.631 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.631 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.631 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.889 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.889 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.889 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.889 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.889 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.889 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.889 { 00:15:26.889 "cntlid": 51, 00:15:26.889 "qid": 0, 00:15:26.889 "state": "enabled", 00:15:26.889 "thread": "nvmf_tgt_poll_group_000", 00:15:26.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:26.889 "listen_address": { 00:15:26.889 "trtype": "TCP", 00:15:26.889 "adrfam": "IPv4", 00:15:26.889 "traddr": "10.0.0.2", 00:15:26.889 "trsvcid": "4420" 00:15:26.889 }, 00:15:26.889 "peer_address": { 00:15:26.889 "trtype": "TCP", 00:15:26.889 "adrfam": "IPv4", 00:15:26.889 "traddr": "10.0.0.1", 00:15:26.889 "trsvcid": "35030" 00:15:26.889 }, 00:15:26.889 "auth": { 00:15:26.889 "state": "completed", 00:15:26.889 "digest": "sha384", 00:15:26.889 "dhgroup": "null" 00:15:26.889 } 00:15:26.889 } 00:15:26.889 ]' 00:15:26.889 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.889 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.889 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.148 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:27.148 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 
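Note: the run above repeats the same connect/authenticate cycle for every DH-HMAC-CHAP digest, dhgroup and key index. A minimal sketch of one iteration is given below, kept to the RPCs and flags that actually appear in this log; the key names key1/ckey1 refer to keys registered earlier in the test (not shown in this excerpt), and the addresses and NQNs are copied from the run for illustration only, not as a definitive reproduction of target/auth.sh.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723

  # Host side: restrict the bdev_nvme driver to one digest/dhgroup combination.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

  # Target side: allow the host on the subsystem with the key pair under test.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Attach a controller through the host RPC server, then check that the qpair authenticated.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect: completed

  # Tear down before the next digest/dhgroup/key combination.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0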
00:15:27.148 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.148 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.148 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.406 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:15:27.406 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:15:28.350 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.350 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:28.350 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.350 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.350 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.350 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.350 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:28.350 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:28.350 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:28.350 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.350 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:28.350 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:28.350 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:28.350 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.350 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.350 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.350 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.350 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.350 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.350 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.350 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.916 00:15:28.916 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.916 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.916 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.175 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.175 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.175 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.175 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.175 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.175 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.175 { 00:15:29.175 "cntlid": 53, 00:15:29.175 "qid": 0, 00:15:29.175 "state": "enabled", 00:15:29.175 "thread": "nvmf_tgt_poll_group_000", 00:15:29.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:29.175 "listen_address": { 00:15:29.175 "trtype": "TCP", 00:15:29.175 "adrfam": "IPv4", 00:15:29.175 "traddr": "10.0.0.2", 00:15:29.175 "trsvcid": "4420" 00:15:29.175 }, 00:15:29.175 "peer_address": { 00:15:29.175 "trtype": "TCP", 00:15:29.175 "adrfam": "IPv4", 00:15:29.175 "traddr": "10.0.0.1", 00:15:29.175 "trsvcid": "35060" 00:15:29.175 }, 00:15:29.175 "auth": { 00:15:29.175 "state": "completed", 00:15:29.175 "digest": "sha384", 00:15:29.175 "dhgroup": "null" 00:15:29.175 } 00:15:29.175 } 00:15:29.175 ]' 00:15:29.175 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.175 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.175 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.175 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:29.175 13:20:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.175 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.175 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.175 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.741 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:15:29.741 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:15:30.307 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.307 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:30.307 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.307 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.307 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.307 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.307 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:30.308 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:30.565 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:30.566 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.566 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:30.566 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:30.566 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:30.566 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.566 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 
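Each cycle also exercises the kernel initiator: after the bdev-level check the controller is detached and the same key pair is replayed through nvme-cli, then torn down and deauthorized again. A sketch of that leg with the transport address, NQNs and flags taken from the log; the DHHC-1 strings below are placeholders for the secrets used in the run, not real values:

    # Host side: connect with nvme-cli, presenting the host secret and the
    # controller secret (bidirectional authentication) in DHHC-1 wire format.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 \
        --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 \
        --dhchap-secret 'DHHC-1:01:<host secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<controller secret>:'

    # Tear the session down and deauthorize the host on the target again.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723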
00:15:30.566 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.566 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.566 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.566 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:30.566 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:30.566 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.131 00:15:31.131 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.131 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.131 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.388 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.389 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.389 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.389 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.389 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.389 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.389 { 00:15:31.389 "cntlid": 55, 00:15:31.389 "qid": 0, 00:15:31.389 "state": "enabled", 00:15:31.389 "thread": "nvmf_tgt_poll_group_000", 00:15:31.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:31.389 "listen_address": { 00:15:31.389 "trtype": "TCP", 00:15:31.389 "adrfam": "IPv4", 00:15:31.389 "traddr": "10.0.0.2", 00:15:31.389 "trsvcid": "4420" 00:15:31.389 }, 00:15:31.389 "peer_address": { 00:15:31.389 "trtype": "TCP", 00:15:31.389 "adrfam": "IPv4", 00:15:31.389 "traddr": "10.0.0.1", 00:15:31.389 "trsvcid": "35104" 00:15:31.389 }, 00:15:31.389 "auth": { 00:15:31.389 "state": "completed", 00:15:31.389 "digest": "sha384", 00:15:31.389 "dhgroup": "null" 00:15:31.389 } 00:15:31.389 } 00:15:31.389 ]' 00:15:31.389 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.389 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.389 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.389 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:31.389 13:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.389 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.389 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.389 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.953 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:15:31.953 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:15:32.521 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.521 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:32.521 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.521 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.521 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.521 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:32.521 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.521 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:32.521 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:32.781 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:32.781 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.781 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:32.781 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:32.781 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:32.781 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.781 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.781 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.781 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.781 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.781 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.781 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.781 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.349 00:15:33.349 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.349 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.349 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.607 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.607 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.607 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.607 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.607 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.607 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.607 { 00:15:33.607 "cntlid": 57, 00:15:33.607 "qid": 0, 00:15:33.607 "state": "enabled", 00:15:33.607 "thread": "nvmf_tgt_poll_group_000", 00:15:33.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:33.607 "listen_address": { 00:15:33.607 "trtype": "TCP", 00:15:33.607 "adrfam": "IPv4", 00:15:33.607 "traddr": "10.0.0.2", 00:15:33.607 "trsvcid": "4420" 00:15:33.607 }, 00:15:33.607 "peer_address": { 00:15:33.607 "trtype": "TCP", 00:15:33.607 "adrfam": "IPv4", 00:15:33.607 "traddr": "10.0.0.1", 00:15:33.607 "trsvcid": "35136" 00:15:33.607 }, 00:15:33.607 "auth": { 00:15:33.607 "state": "completed", 00:15:33.607 "digest": "sha384", 00:15:33.607 "dhgroup": "ffdhe2048" 00:15:33.607 } 00:15:33.607 } 00:15:33.607 ]' 00:15:33.607 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.607 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.607 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.607 13:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:33.607 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.866 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.866 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.866 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.124 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:15:34.124 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:15:34.693 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.693 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:34.693 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.693 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.693 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.693 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.693 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:34.693 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:34.956 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:34.956 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.956 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:34.956 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:34.956 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:34.956 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
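From here the same pattern repeats with a real DH group: before each attach the host-side bdev_nvme layer is pinned to the digest/dhgroup pair under test, so the negotiation cannot fall back to anything else. A sketch of that setup plus the authenticated attach, with the host RPC socket, addresses and NQNs as they appear in the log (key1/ckey1 are key names registered earlier in the run):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock

    # Restrict the host to a single digest and DH group for this pass.
    $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

    # Attach an authenticated controller; the controller key makes it bidirectional.
    $RPC -s $HOST_SOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1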
00:15:34.956 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.956 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.956 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.215 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.215 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.215 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.215 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.473 00:15:35.473 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.473 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.473 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.731 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.731 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.731 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.731 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.731 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.731 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.731 { 00:15:35.731 "cntlid": 59, 00:15:35.731 "qid": 0, 00:15:35.731 "state": "enabled", 00:15:35.731 "thread": "nvmf_tgt_poll_group_000", 00:15:35.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:35.731 "listen_address": { 00:15:35.731 "trtype": "TCP", 00:15:35.731 "adrfam": "IPv4", 00:15:35.731 "traddr": "10.0.0.2", 00:15:35.731 "trsvcid": "4420" 00:15:35.731 }, 00:15:35.731 "peer_address": { 00:15:35.731 "trtype": "TCP", 00:15:35.731 "adrfam": "IPv4", 00:15:35.731 "traddr": "10.0.0.1", 00:15:35.731 "trsvcid": "35150" 00:15:35.731 }, 00:15:35.731 "auth": { 00:15:35.731 "state": "completed", 00:15:35.731 "digest": "sha384", 00:15:35.731 "dhgroup": "ffdhe2048" 00:15:35.731 } 00:15:35.731 } 00:15:35.731 ]' 00:15:35.731 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.731 13:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.731 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.731 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:35.731 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.991 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.991 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.991 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.249 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:15:36.249 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:15:36.816 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.816 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:36.816 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.816 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.074 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.074 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.074 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:37.074 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:37.333 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:37.333 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.333 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:37.333 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:37.333 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 
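On the target side each cycle authorizes the host NQN with the key pair under test before the host attaches, and the matching remove_host at the end of the cycle revokes it. A sketch of the authorization call that the next few entries echo (rpc_cmd in the log wraps this call against the target's RPC socket; key2/ckey2 are key names registered earlier in the run):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side: allow this host NQN on the subsystem, bind it to key2 and
    # require the controller to authenticate back with ckey2.
    $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2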
00:15:37.333 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.333 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.333 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.333 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.333 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.333 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.333 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.333 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.899 00:15:37.899 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.899 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.899 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.157 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.157 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.157 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.157 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.157 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.157 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.157 { 00:15:38.157 "cntlid": 61, 00:15:38.157 "qid": 0, 00:15:38.157 "state": "enabled", 00:15:38.157 "thread": "nvmf_tgt_poll_group_000", 00:15:38.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:38.157 "listen_address": { 00:15:38.157 "trtype": "TCP", 00:15:38.157 "adrfam": "IPv4", 00:15:38.157 "traddr": "10.0.0.2", 00:15:38.157 "trsvcid": "4420" 00:15:38.157 }, 00:15:38.157 "peer_address": { 00:15:38.157 "trtype": "TCP", 00:15:38.157 "adrfam": "IPv4", 00:15:38.157 "traddr": "10.0.0.1", 00:15:38.157 "trsvcid": "41834" 00:15:38.157 }, 00:15:38.157 "auth": { 00:15:38.157 "state": "completed", 00:15:38.157 "digest": "sha384", 00:15:38.157 "dhgroup": "ffdhe2048" 00:15:38.157 } 00:15:38.157 } 00:15:38.157 ]' 00:15:38.157 13:20:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.157 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.157 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.157 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:38.157 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.157 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.157 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.157 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.723 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:15:38.723 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:15:39.290 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.290 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:39.290 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.290 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.290 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.290 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.290 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:39.290 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:39.856 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:39.856 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.856 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:39.856 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe2048 00:15:39.856 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:39.856 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.856 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 00:15:39.856 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.856 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.856 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.856 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:39.856 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:39.856 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.114 00:15:40.114 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.114 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.114 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.395 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.395 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.395 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.395 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.395 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.395 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.395 { 00:15:40.395 "cntlid": 63, 00:15:40.395 "qid": 0, 00:15:40.395 "state": "enabled", 00:15:40.395 "thread": "nvmf_tgt_poll_group_000", 00:15:40.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:40.395 "listen_address": { 00:15:40.395 "trtype": "TCP", 00:15:40.395 "adrfam": "IPv4", 00:15:40.395 "traddr": "10.0.0.2", 00:15:40.395 "trsvcid": "4420" 00:15:40.395 }, 00:15:40.395 "peer_address": { 00:15:40.395 "trtype": "TCP", 00:15:40.395 "adrfam": "IPv4", 00:15:40.395 "traddr": "10.0.0.1", 00:15:40.395 "trsvcid": "41862" 00:15:40.395 }, 00:15:40.395 "auth": { 00:15:40.395 "state": "completed", 00:15:40.395 "digest": "sha384", 00:15:40.395 "dhgroup": "ffdhe2048" 00:15:40.395 } 00:15:40.395 } 00:15:40.395 ]' 00:15:40.395 
13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.395 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.395 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.682 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:40.682 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.682 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.682 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.682 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.941 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:15:40.941 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:15:41.508 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.508 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:41.508 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.508 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.766 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.766 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.766 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.766 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:41.766 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:42.023 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:42.023 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.023 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.023 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:42.023 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:42.023 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.023 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.023 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.023 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.023 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.023 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.023 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.023 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.589 00:15:42.589 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.589 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.589 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.846 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.846 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.846 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.846 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.846 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.846 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.846 { 00:15:42.846 "cntlid": 65, 00:15:42.846 "qid": 0, 00:15:42.846 "state": "enabled", 00:15:42.846 "thread": "nvmf_tgt_poll_group_000", 00:15:42.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:42.846 "listen_address": { 00:15:42.846 "trtype": "TCP", 00:15:42.846 "adrfam": "IPv4", 00:15:42.846 "traddr": "10.0.0.2", 00:15:42.846 "trsvcid": "4420" 00:15:42.846 }, 00:15:42.846 "peer_address": { 00:15:42.846 "trtype": "TCP", 00:15:42.846 "adrfam": "IPv4", 00:15:42.846 "traddr": "10.0.0.1", 00:15:42.846 "trsvcid": "41892" 00:15:42.846 }, 00:15:42.846 "auth": { 00:15:42.846 "state": "completed", 
00:15:42.846 "digest": "sha384", 00:15:42.846 "dhgroup": "ffdhe3072" 00:15:42.846 } 00:15:42.846 } 00:15:42.846 ]' 00:15:42.846 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.846 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.846 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.846 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:42.846 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.846 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.846 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.846 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.411 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:15:43.411 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:15:43.975 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.975 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:43.975 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.975 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.975 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.975 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.975 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:43.975 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:44.233 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:44.233 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:15:44.233 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:44.233 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:44.233 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:44.233 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.233 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.233 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.233 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.233 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.233 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.233 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.233 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.798 00:15:44.798 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.798 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.798 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.055 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.055 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.055 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.055 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.055 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.055 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.055 { 00:15:45.055 "cntlid": 67, 00:15:45.055 "qid": 0, 00:15:45.055 "state": "enabled", 00:15:45.055 "thread": "nvmf_tgt_poll_group_000", 00:15:45.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:45.055 "listen_address": { 00:15:45.055 "trtype": "TCP", 00:15:45.055 "adrfam": "IPv4", 00:15:45.055 "traddr": "10.0.0.2", 00:15:45.055 "trsvcid": "4420" 00:15:45.055 }, 00:15:45.055 "peer_address": { 00:15:45.055 
"trtype": "TCP", 00:15:45.055 "adrfam": "IPv4", 00:15:45.055 "traddr": "10.0.0.1", 00:15:45.055 "trsvcid": "41936" 00:15:45.055 }, 00:15:45.055 "auth": { 00:15:45.055 "state": "completed", 00:15:45.055 "digest": "sha384", 00:15:45.055 "dhgroup": "ffdhe3072" 00:15:45.055 } 00:15:45.055 } 00:15:45.055 ]' 00:15:45.055 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.055 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.055 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.055 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:45.055 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.312 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.312 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.312 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.570 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:15:45.570 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:15:46.527 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.527 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:46.527 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.527 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.527 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.527 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.527 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:46.527 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:46.785 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 
00:15:46.785 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.785 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:46.785 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:46.785 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:46.785 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.785 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.785 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.785 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.785 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.785 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.785 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.785 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.043 00:15:47.043 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.043 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.043 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.300 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.300 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.300 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.300 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.300 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.300 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.300 { 00:15:47.300 "cntlid": 69, 00:15:47.300 "qid": 0, 00:15:47.300 "state": "enabled", 00:15:47.300 "thread": "nvmf_tgt_poll_group_000", 00:15:47.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:47.300 "listen_address": { 00:15:47.300 "trtype": "TCP", 00:15:47.300 "adrfam": "IPv4", 00:15:47.300 
"traddr": "10.0.0.2", 00:15:47.300 "trsvcid": "4420" 00:15:47.300 }, 00:15:47.300 "peer_address": { 00:15:47.300 "trtype": "TCP", 00:15:47.300 "adrfam": "IPv4", 00:15:47.300 "traddr": "10.0.0.1", 00:15:47.300 "trsvcid": "57078" 00:15:47.300 }, 00:15:47.300 "auth": { 00:15:47.300 "state": "completed", 00:15:47.300 "digest": "sha384", 00:15:47.300 "dhgroup": "ffdhe3072" 00:15:47.300 } 00:15:47.300 } 00:15:47.300 ]' 00:15:47.300 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.559 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.559 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.559 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:47.559 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.559 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.559 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.559 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.817 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:15:47.817 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:15:48.752 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.752 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:48.752 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.752 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.752 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.752 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.752 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:48.752 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:49.010 13:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:49.010 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.010 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:49.010 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:49.010 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:49.010 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.010 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 00:15:49.010 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.010 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.010 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.010 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:49.010 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:49.010 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:49.269 00:15:49.527 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.527 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.527 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.786 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.786 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.786 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.787 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.787 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.787 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.787 { 00:15:49.787 "cntlid": 71, 00:15:49.787 "qid": 0, 00:15:49.787 "state": "enabled", 00:15:49.787 "thread": "nvmf_tgt_poll_group_000", 00:15:49.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:49.787 "listen_address": { 00:15:49.787 "trtype": "TCP", 00:15:49.787 "adrfam": "IPv4", 
00:15:49.787 "traddr": "10.0.0.2", 00:15:49.787 "trsvcid": "4420" 00:15:49.787 }, 00:15:49.787 "peer_address": { 00:15:49.787 "trtype": "TCP", 00:15:49.787 "adrfam": "IPv4", 00:15:49.787 "traddr": "10.0.0.1", 00:15:49.787 "trsvcid": "57086" 00:15:49.787 }, 00:15:49.787 "auth": { 00:15:49.787 "state": "completed", 00:15:49.787 "digest": "sha384", 00:15:49.787 "dhgroup": "ffdhe3072" 00:15:49.787 } 00:15:49.787 } 00:15:49.787 ]' 00:15:49.787 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.787 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.787 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.787 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:49.787 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.787 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.787 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.787 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.353 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:15:50.354 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:15:50.920 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.920 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:50.920 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.920 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.920 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.920 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:50.920 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.920 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:50.920 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 
00:15:51.487 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:51.487 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.487 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:51.487 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:51.487 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:51.487 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.487 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.487 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.487 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.487 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.487 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.487 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.487 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.746 00:15:51.746 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.746 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.746 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.019 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.019 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.019 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.019 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.287 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.287 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.287 { 00:15:52.287 "cntlid": 73, 00:15:52.287 "qid": 0, 00:15:52.287 "state": "enabled", 00:15:52.287 "thread": "nvmf_tgt_poll_group_000", 00:15:52.287 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:52.287 "listen_address": { 00:15:52.287 "trtype": "TCP", 00:15:52.287 "adrfam": "IPv4", 00:15:52.287 "traddr": "10.0.0.2", 00:15:52.287 "trsvcid": "4420" 00:15:52.287 }, 00:15:52.287 "peer_address": { 00:15:52.287 "trtype": "TCP", 00:15:52.287 "adrfam": "IPv4", 00:15:52.287 "traddr": "10.0.0.1", 00:15:52.287 "trsvcid": "57112" 00:15:52.287 }, 00:15:52.287 "auth": { 00:15:52.287 "state": "completed", 00:15:52.287 "digest": "sha384", 00:15:52.287 "dhgroup": "ffdhe4096" 00:15:52.287 } 00:15:52.287 } 00:15:52.287 ]' 00:15:52.287 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.287 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.287 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.287 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:52.288 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.288 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.288 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.288 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.546 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:15:52.546 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:15:53.482 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.482 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:53.482 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.482 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.482 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.482 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.482 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:15:53.482 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:53.740 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:53.740 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.740 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.740 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:53.740 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:53.740 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.740 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.740 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.740 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.740 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.740 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.740 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.740 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.998 00:15:54.260 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.260 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.260 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.518 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.518 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.518 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.518 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.518 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.518 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@74 -- # qpairs='[ 00:15:54.518 { 00:15:54.518 "cntlid": 75, 00:15:54.518 "qid": 0, 00:15:54.518 "state": "enabled", 00:15:54.518 "thread": "nvmf_tgt_poll_group_000", 00:15:54.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:54.518 "listen_address": { 00:15:54.518 "trtype": "TCP", 00:15:54.518 "adrfam": "IPv4", 00:15:54.518 "traddr": "10.0.0.2", 00:15:54.518 "trsvcid": "4420" 00:15:54.518 }, 00:15:54.518 "peer_address": { 00:15:54.518 "trtype": "TCP", 00:15:54.518 "adrfam": "IPv4", 00:15:54.518 "traddr": "10.0.0.1", 00:15:54.518 "trsvcid": "57126" 00:15:54.518 }, 00:15:54.518 "auth": { 00:15:54.518 "state": "completed", 00:15:54.518 "digest": "sha384", 00:15:54.518 "dhgroup": "ffdhe4096" 00:15:54.518 } 00:15:54.518 } 00:15:54.518 ]' 00:15:54.518 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.518 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.518 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.518 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:54.518 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.518 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.518 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.518 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.084 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:15:55.084 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:15:55.651 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.651 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:55.651 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.651 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.651 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.651 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.651 13:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:55.651 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:56.217 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:56.217 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.217 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:56.217 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:56.217 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:56.217 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.217 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.217 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.217 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.217 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.217 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.217 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.217 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.477 00:15:56.477 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.477 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.477 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.044 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.044 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.044 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.044 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.044 13:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.044 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.044 { 00:15:57.044 "cntlid": 77, 00:15:57.044 "qid": 0, 00:15:57.044 "state": "enabled", 00:15:57.044 "thread": "nvmf_tgt_poll_group_000", 00:15:57.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:57.044 "listen_address": { 00:15:57.044 "trtype": "TCP", 00:15:57.044 "adrfam": "IPv4", 00:15:57.044 "traddr": "10.0.0.2", 00:15:57.044 "trsvcid": "4420" 00:15:57.044 }, 00:15:57.044 "peer_address": { 00:15:57.044 "trtype": "TCP", 00:15:57.044 "adrfam": "IPv4", 00:15:57.044 "traddr": "10.0.0.1", 00:15:57.044 "trsvcid": "39528" 00:15:57.044 }, 00:15:57.044 "auth": { 00:15:57.044 "state": "completed", 00:15:57.044 "digest": "sha384", 00:15:57.044 "dhgroup": "ffdhe4096" 00:15:57.044 } 00:15:57.044 } 00:15:57.044 ]' 00:15:57.044 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.044 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.044 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.044 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:57.044 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.044 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.044 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.044 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.303 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:15:57.303 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:15:58.237 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.237 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:15:58.238 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.238 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.238 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
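Each iteration of the loop traced above pairs a target-side nvmf_subsystem_add_host with a host-side bdev_nvme_attach_controller presenting the matching keys, after restricting the host to the digest/dhgroup under test; a minimal sketch of one ffdhe4096 pass, assuming key2/ckey2 were registered earlier in the script as in this run:
  # host bdev layer: allow only the chosen digest and DH group
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
  # target: require DH-HMAC-CHAP for this host, with bidirectional keys
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host: attach a controller that authenticates with the same key pair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2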
00:15:58.238 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.238 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:58.238 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:58.496 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:58.496 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.496 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.496 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:58.496 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:58.496 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.496 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 00:15:58.496 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.496 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.496 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.496 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:58.496 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.496 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:59.062 00:15:59.062 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.062 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.062 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.320 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.320 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.320 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.320 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.320 
13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.320 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.320 { 00:15:59.320 "cntlid": 79, 00:15:59.320 "qid": 0, 00:15:59.320 "state": "enabled", 00:15:59.320 "thread": "nvmf_tgt_poll_group_000", 00:15:59.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:15:59.320 "listen_address": { 00:15:59.320 "trtype": "TCP", 00:15:59.320 "adrfam": "IPv4", 00:15:59.320 "traddr": "10.0.0.2", 00:15:59.320 "trsvcid": "4420" 00:15:59.320 }, 00:15:59.320 "peer_address": { 00:15:59.320 "trtype": "TCP", 00:15:59.320 "adrfam": "IPv4", 00:15:59.320 "traddr": "10.0.0.1", 00:15:59.320 "trsvcid": "39562" 00:15:59.320 }, 00:15:59.320 "auth": { 00:15:59.320 "state": "completed", 00:15:59.320 "digest": "sha384", 00:15:59.320 "dhgroup": "ffdhe4096" 00:15:59.320 } 00:15:59.320 } 00:15:59.320 ]' 00:15:59.320 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.320 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.320 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.320 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:59.579 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.579 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.579 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.579 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.837 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:15:59.837 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:16:00.770 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.770 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:00.770 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.770 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.770 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.770 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:16:00.770 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.770 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:00.770 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:01.029 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:01.029 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.029 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:01.029 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:01.029 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:01.029 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.029 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.029 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.029 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.029 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.029 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.029 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.029 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.596 00:16:01.596 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.596 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.596 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.854 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.854 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.854 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:01.854 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.854 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.854 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.854 { 00:16:01.854 "cntlid": 81, 00:16:01.854 "qid": 0, 00:16:01.854 "state": "enabled", 00:16:01.854 "thread": "nvmf_tgt_poll_group_000", 00:16:01.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:01.854 "listen_address": { 00:16:01.854 "trtype": "TCP", 00:16:01.854 "adrfam": "IPv4", 00:16:01.854 "traddr": "10.0.0.2", 00:16:01.854 "trsvcid": "4420" 00:16:01.854 }, 00:16:01.854 "peer_address": { 00:16:01.854 "trtype": "TCP", 00:16:01.854 "adrfam": "IPv4", 00:16:01.854 "traddr": "10.0.0.1", 00:16:01.854 "trsvcid": "39588" 00:16:01.854 }, 00:16:01.854 "auth": { 00:16:01.854 "state": "completed", 00:16:01.854 "digest": "sha384", 00:16:01.854 "dhgroup": "ffdhe6144" 00:16:01.854 } 00:16:01.854 } 00:16:01.854 ]' 00:16:01.854 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.854 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.113 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.113 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:02.113 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.113 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.113 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.113 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.375 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:16:02.375 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:16:03.310 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.311 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:03.311 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:03.311 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.311 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.311 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.311 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:03.311 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:03.569 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:03.569 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.569 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:03.569 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:03.569 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:03.569 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.569 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.569 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.569 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.569 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.569 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.569 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.569 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.137 00:16:04.137 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.137 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.137 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.398 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.398 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.398 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.398 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.398 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.398 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.398 { 00:16:04.398 "cntlid": 83, 00:16:04.398 "qid": 0, 00:16:04.398 "state": "enabled", 00:16:04.398 "thread": "nvmf_tgt_poll_group_000", 00:16:04.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:04.398 "listen_address": { 00:16:04.398 "trtype": "TCP", 00:16:04.398 "adrfam": "IPv4", 00:16:04.398 "traddr": "10.0.0.2", 00:16:04.398 "trsvcid": "4420" 00:16:04.398 }, 00:16:04.398 "peer_address": { 00:16:04.398 "trtype": "TCP", 00:16:04.398 "adrfam": "IPv4", 00:16:04.398 "traddr": "10.0.0.1", 00:16:04.398 "trsvcid": "39620" 00:16:04.398 }, 00:16:04.398 "auth": { 00:16:04.398 "state": "completed", 00:16:04.398 "digest": "sha384", 00:16:04.398 "dhgroup": "ffdhe6144" 00:16:04.398 } 00:16:04.398 } 00:16:04.398 ]' 00:16:04.398 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.398 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.398 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.398 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:04.398 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.398 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.398 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.398 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.656 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:16:04.656 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:16:05.592 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.592 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:05.592 
13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.592 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.592 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.592 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.592 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:05.592 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:05.851 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:05.851 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.851 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:05.851 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:05.851 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:05.851 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.851 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.851 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.851 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.851 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.851 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.851 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.851 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.418 00:16:06.418 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.418 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.418 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.677 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.677 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.677 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.677 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.677 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.677 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.677 { 00:16:06.677 "cntlid": 85, 00:16:06.677 "qid": 0, 00:16:06.677 "state": "enabled", 00:16:06.677 "thread": "nvmf_tgt_poll_group_000", 00:16:06.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:06.677 "listen_address": { 00:16:06.677 "trtype": "TCP", 00:16:06.677 "adrfam": "IPv4", 00:16:06.677 "traddr": "10.0.0.2", 00:16:06.677 "trsvcid": "4420" 00:16:06.677 }, 00:16:06.677 "peer_address": { 00:16:06.677 "trtype": "TCP", 00:16:06.677 "adrfam": "IPv4", 00:16:06.677 "traddr": "10.0.0.1", 00:16:06.677 "trsvcid": "52960" 00:16:06.677 }, 00:16:06.677 "auth": { 00:16:06.677 "state": "completed", 00:16:06.677 "digest": "sha384", 00:16:06.677 "dhgroup": "ffdhe6144" 00:16:06.677 } 00:16:06.677 } 00:16:06.677 ]' 00:16:06.677 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.935 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.935 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.935 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:06.935 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.935 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.935 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.935 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.209 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:16:07.209 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:16:08.164 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.164 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:08.164 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.164 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.164 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.164 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.164 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:08.164 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:08.423 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:08.423 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.423 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.423 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:08.423 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:08.423 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.423 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 00:16:08.423 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.423 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.423 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.423 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:08.423 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.423 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.993 00:16:08.993 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.993 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.993 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.251 13:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.251 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.251 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.251 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.251 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.251 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.251 { 00:16:09.251 "cntlid": 87, 00:16:09.251 "qid": 0, 00:16:09.251 "state": "enabled", 00:16:09.251 "thread": "nvmf_tgt_poll_group_000", 00:16:09.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:09.251 "listen_address": { 00:16:09.251 "trtype": "TCP", 00:16:09.251 "adrfam": "IPv4", 00:16:09.251 "traddr": "10.0.0.2", 00:16:09.251 "trsvcid": "4420" 00:16:09.251 }, 00:16:09.251 "peer_address": { 00:16:09.251 "trtype": "TCP", 00:16:09.251 "adrfam": "IPv4", 00:16:09.251 "traddr": "10.0.0.1", 00:16:09.251 "trsvcid": "52974" 00:16:09.251 }, 00:16:09.251 "auth": { 00:16:09.251 "state": "completed", 00:16:09.251 "digest": "sha384", 00:16:09.251 "dhgroup": "ffdhe6144" 00:16:09.251 } 00:16:09.251 } 00:16:09.251 ]' 00:16:09.251 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.251 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.251 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.251 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:09.251 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.251 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.251 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.251 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.817 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:16:09.817 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:16:10.383 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.383 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:10.383 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.383 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.383 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.383 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.383 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.383 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:10.383 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:10.948 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:10.948 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.948 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:10.948 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:10.948 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:10.948 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.948 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.948 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.948 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.948 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.948 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.948 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.949 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.514 00:16:11.514 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.514 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:11.514 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.786 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.786 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.786 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.786 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.786 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.786 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.786 { 00:16:11.786 "cntlid": 89, 00:16:11.786 "qid": 0, 00:16:11.786 "state": "enabled", 00:16:11.786 "thread": "nvmf_tgt_poll_group_000", 00:16:11.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:11.786 "listen_address": { 00:16:11.786 "trtype": "TCP", 00:16:11.786 "adrfam": "IPv4", 00:16:11.786 "traddr": "10.0.0.2", 00:16:11.786 "trsvcid": "4420" 00:16:11.786 }, 00:16:11.786 "peer_address": { 00:16:11.786 "trtype": "TCP", 00:16:11.786 "adrfam": "IPv4", 00:16:11.786 "traddr": "10.0.0.1", 00:16:11.786 "trsvcid": "53006" 00:16:11.786 }, 00:16:11.786 "auth": { 00:16:11.786 "state": "completed", 00:16:11.786 "digest": "sha384", 00:16:11.786 "dhgroup": "ffdhe8192" 00:16:11.786 } 00:16:11.786 } 00:16:11.786 ]' 00:16:11.786 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.052 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.052 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.052 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:12.052 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.052 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.052 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.052 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.310 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:16:12.310 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:16:13.243 13:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.243 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:13.243 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.243 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.243 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.243 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.243 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:13.243 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:13.501 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:13.501 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.501 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.501 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:13.501 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:13.501 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.501 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.501 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.501 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.501 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.501 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.501 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.501 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.067 00:16:14.067 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.067 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.067 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.326 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.326 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.326 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.326 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.326 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.326 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.326 { 00:16:14.326 "cntlid": 91, 00:16:14.326 "qid": 0, 00:16:14.326 "state": "enabled", 00:16:14.326 "thread": "nvmf_tgt_poll_group_000", 00:16:14.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:14.326 "listen_address": { 00:16:14.326 "trtype": "TCP", 00:16:14.326 "adrfam": "IPv4", 00:16:14.326 "traddr": "10.0.0.2", 00:16:14.326 "trsvcid": "4420" 00:16:14.326 }, 00:16:14.326 "peer_address": { 00:16:14.326 "trtype": "TCP", 00:16:14.326 "adrfam": "IPv4", 00:16:14.326 "traddr": "10.0.0.1", 00:16:14.326 "trsvcid": "53022" 00:16:14.326 }, 00:16:14.326 "auth": { 00:16:14.326 "state": "completed", 00:16:14.326 "digest": "sha384", 00:16:14.326 "dhgroup": "ffdhe8192" 00:16:14.326 } 00:16:14.326 } 00:16:14.326 ]' 00:16:14.326 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.585 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.585 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.585 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:14.585 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.585 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.585 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.586 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.849 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:16:14.849 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: 
--dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:16:15.785 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.785 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:15.785 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.785 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.785 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.785 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.785 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:15.785 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:15.785 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:15.785 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.785 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:15.785 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:15.785 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:15.785 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.785 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.785 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.785 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.785 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.785 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.786 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.786 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.721 00:16:16.721 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.721 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.721 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.980 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.980 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.980 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.980 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.980 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.980 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.980 { 00:16:16.980 "cntlid": 93, 00:16:16.980 "qid": 0, 00:16:16.980 "state": "enabled", 00:16:16.980 "thread": "nvmf_tgt_poll_group_000", 00:16:16.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:16.980 "listen_address": { 00:16:16.980 "trtype": "TCP", 00:16:16.980 "adrfam": "IPv4", 00:16:16.980 "traddr": "10.0.0.2", 00:16:16.980 "trsvcid": "4420" 00:16:16.980 }, 00:16:16.980 "peer_address": { 00:16:16.980 "trtype": "TCP", 00:16:16.980 "adrfam": "IPv4", 00:16:16.980 "traddr": "10.0.0.1", 00:16:16.980 "trsvcid": "46442" 00:16:16.980 }, 00:16:16.980 "auth": { 00:16:16.980 "state": "completed", 00:16:16.980 "digest": "sha384", 00:16:16.980 "dhgroup": "ffdhe8192" 00:16:16.980 } 00:16:16.980 } 00:16:16.980 ]' 00:16:16.980 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.980 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.980 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.980 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:16.980 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.980 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.980 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.980 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.239 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:16:17.239 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 
1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:16:18.175 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.175 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:18.175 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.175 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.175 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.175 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.175 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.175 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.433 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:18.433 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.433 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.433 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:18.433 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:18.433 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.433 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 00:16:18.433 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.433 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.433 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.433 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:18.433 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.433 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.002 00:16:19.002 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.002 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.002 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.571 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.571 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.571 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.571 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.571 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.571 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.571 { 00:16:19.571 "cntlid": 95, 00:16:19.571 "qid": 0, 00:16:19.571 "state": "enabled", 00:16:19.571 "thread": "nvmf_tgt_poll_group_000", 00:16:19.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:19.571 "listen_address": { 00:16:19.571 "trtype": "TCP", 00:16:19.571 "adrfam": "IPv4", 00:16:19.571 "traddr": "10.0.0.2", 00:16:19.571 "trsvcid": "4420" 00:16:19.571 }, 00:16:19.571 "peer_address": { 00:16:19.571 "trtype": "TCP", 00:16:19.571 "adrfam": "IPv4", 00:16:19.571 "traddr": "10.0.0.1", 00:16:19.571 "trsvcid": "46458" 00:16:19.571 }, 00:16:19.571 "auth": { 00:16:19.571 "state": "completed", 00:16:19.571 "digest": "sha384", 00:16:19.571 "dhgroup": "ffdhe8192" 00:16:19.571 } 00:16:19.571 } 00:16:19.571 ]' 00:16:19.571 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.571 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.571 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.571 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:19.571 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.571 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.571 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.571 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.829 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:16:19.829 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret 
DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:16:20.765 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.765 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:20.765 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.765 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.765 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.765 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:20.765 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.765 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.765 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:20.765 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:21.024 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:21.025 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.025 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:21.025 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:21.025 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:21.025 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.025 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.025 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.025 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.025 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.025 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.025 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.025 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.614 00:16:21.614 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.614 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.614 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.872 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.872 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.872 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.872 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.872 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.872 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.872 { 00:16:21.872 "cntlid": 97, 00:16:21.872 "qid": 0, 00:16:21.872 "state": "enabled", 00:16:21.872 "thread": "nvmf_tgt_poll_group_000", 00:16:21.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:21.872 "listen_address": { 00:16:21.872 "trtype": "TCP", 00:16:21.872 "adrfam": "IPv4", 00:16:21.872 "traddr": "10.0.0.2", 00:16:21.872 "trsvcid": "4420" 00:16:21.872 }, 00:16:21.872 "peer_address": { 00:16:21.872 "trtype": "TCP", 00:16:21.872 "adrfam": "IPv4", 00:16:21.872 "traddr": "10.0.0.1", 00:16:21.872 "trsvcid": "46478" 00:16:21.872 }, 00:16:21.872 "auth": { 00:16:21.872 "state": "completed", 00:16:21.872 "digest": "sha512", 00:16:21.872 "dhgroup": "null" 00:16:21.872 } 00:16:21.872 } 00:16:21.872 ]' 00:16:21.872 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.872 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.872 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.872 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:21.872 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.872 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.872 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.872 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.438 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret 
DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:16:22.438 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:16:23.004 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.004 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:23.004 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.004 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.004 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.004 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.004 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:23.004 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:23.262 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:23.262 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.262 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:23.262 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:23.262 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:23.262 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.262 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.262 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.262 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.262 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.262 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.262 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.262 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.835 00:16:23.835 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.835 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.835 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.094 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.094 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.094 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.094 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.094 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.094 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.094 { 00:16:24.094 "cntlid": 99, 00:16:24.094 "qid": 0, 00:16:24.094 "state": "enabled", 00:16:24.094 "thread": "nvmf_tgt_poll_group_000", 00:16:24.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:24.094 "listen_address": { 00:16:24.094 "trtype": "TCP", 00:16:24.094 "adrfam": "IPv4", 00:16:24.094 "traddr": "10.0.0.2", 00:16:24.094 "trsvcid": "4420" 00:16:24.094 }, 00:16:24.094 "peer_address": { 00:16:24.094 "trtype": "TCP", 00:16:24.094 "adrfam": "IPv4", 00:16:24.094 "traddr": "10.0.0.1", 00:16:24.094 "trsvcid": "46508" 00:16:24.094 }, 00:16:24.094 "auth": { 00:16:24.094 "state": "completed", 00:16:24.094 "digest": "sha512", 00:16:24.094 "dhgroup": "null" 00:16:24.094 } 00:16:24.094 } 00:16:24.094 ]' 00:16:24.094 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.094 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.094 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.094 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:24.094 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.094 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.094 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.094 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.353 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:16:24.353 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:16:25.290 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.290 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:25.290 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.290 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.290 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.290 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.290 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:25.290 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:25.549 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:25.549 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.549 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:25.549 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:25.549 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:25.549 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.549 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.549 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.549 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.549 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.549 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.549 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.549 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.807 00:16:25.807 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.807 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.807 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.374 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.374 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.374 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.374 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.374 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.374 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.374 { 00:16:26.374 "cntlid": 101, 00:16:26.374 "qid": 0, 00:16:26.374 "state": "enabled", 00:16:26.374 "thread": "nvmf_tgt_poll_group_000", 00:16:26.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:26.374 "listen_address": { 00:16:26.374 "trtype": "TCP", 00:16:26.374 "adrfam": "IPv4", 00:16:26.374 "traddr": "10.0.0.2", 00:16:26.374 "trsvcid": "4420" 00:16:26.374 }, 00:16:26.374 "peer_address": { 00:16:26.374 "trtype": "TCP", 00:16:26.374 "adrfam": "IPv4", 00:16:26.374 "traddr": "10.0.0.1", 00:16:26.374 "trsvcid": "46536" 00:16:26.374 }, 00:16:26.374 "auth": { 00:16:26.374 "state": "completed", 00:16:26.374 "digest": "sha512", 00:16:26.374 "dhgroup": "null" 00:16:26.374 } 00:16:26.374 } 00:16:26.374 ]' 00:16:26.374 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.374 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.374 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.374 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:26.374 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.374 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.374 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.374 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:16:26.633 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:16:26.633 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:16:27.567 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.568 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:27.568 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.568 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.568 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.568 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.568 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:27.568 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:27.826 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:27.826 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.826 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:27.826 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:27.826 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:27.826 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.826 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 00:16:27.826 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.826 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.826 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.826 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:27.826 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.826 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:28.084 00:16:28.084 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.084 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.084 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.346 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.346 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.346 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.346 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.346 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.346 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.346 { 00:16:28.346 "cntlid": 103, 00:16:28.346 "qid": 0, 00:16:28.346 "state": "enabled", 00:16:28.346 "thread": "nvmf_tgt_poll_group_000", 00:16:28.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:28.346 "listen_address": { 00:16:28.346 "trtype": "TCP", 00:16:28.346 "adrfam": "IPv4", 00:16:28.346 "traddr": "10.0.0.2", 00:16:28.346 "trsvcid": "4420" 00:16:28.346 }, 00:16:28.346 "peer_address": { 00:16:28.346 "trtype": "TCP", 00:16:28.346 "adrfam": "IPv4", 00:16:28.346 "traddr": "10.0.0.1", 00:16:28.346 "trsvcid": "55092" 00:16:28.346 }, 00:16:28.346 "auth": { 00:16:28.346 "state": "completed", 00:16:28.346 "digest": "sha512", 00:16:28.346 "dhgroup": "null" 00:16:28.346 } 00:16:28.346 } 00:16:28.346 ]' 00:16:28.346 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.606 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.606 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.606 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:28.606 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.606 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.606 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.606 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.864 13:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:16:28.864 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:16:29.797 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.797 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:29.797 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.797 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.797 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.797 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.797 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.797 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:29.797 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:30.055 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:30.055 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.055 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:30.055 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:30.055 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:30.055 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.055 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.055 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.055 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.055 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.055 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.055 13:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.055 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.315 00:16:30.315 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.315 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.315 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.600 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.600 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.600 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.600 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.600 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.600 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.600 { 00:16:30.600 "cntlid": 105, 00:16:30.600 "qid": 0, 00:16:30.600 "state": "enabled", 00:16:30.600 "thread": "nvmf_tgt_poll_group_000", 00:16:30.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:30.600 "listen_address": { 00:16:30.600 "trtype": "TCP", 00:16:30.600 "adrfam": "IPv4", 00:16:30.600 "traddr": "10.0.0.2", 00:16:30.600 "trsvcid": "4420" 00:16:30.600 }, 00:16:30.600 "peer_address": { 00:16:30.600 "trtype": "TCP", 00:16:30.600 "adrfam": "IPv4", 00:16:30.600 "traddr": "10.0.0.1", 00:16:30.600 "trsvcid": "55108" 00:16:30.600 }, 00:16:30.600 "auth": { 00:16:30.600 "state": "completed", 00:16:30.600 "digest": "sha512", 00:16:30.600 "dhgroup": "ffdhe2048" 00:16:30.600 } 00:16:30.600 } 00:16:30.600 ]' 00:16:30.600 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.882 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.882 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.882 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:30.882 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.882 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.882 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.882 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.139 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:16:31.139 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:16:32.512 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.512 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:32.512 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.512 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.512 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.512 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.512 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:32.512 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:32.771 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:32.771 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.771 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:32.771 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:32.771 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:32.771 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.771 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.771 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.771 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.771 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:32.771 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.771 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.771 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.028 00:16:33.028 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.028 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.028 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.593 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.593 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.593 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.593 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.593 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.593 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.593 { 00:16:33.593 "cntlid": 107, 00:16:33.593 "qid": 0, 00:16:33.593 "state": "enabled", 00:16:33.593 "thread": "nvmf_tgt_poll_group_000", 00:16:33.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:33.593 "listen_address": { 00:16:33.593 "trtype": "TCP", 00:16:33.593 "adrfam": "IPv4", 00:16:33.593 "traddr": "10.0.0.2", 00:16:33.593 "trsvcid": "4420" 00:16:33.593 }, 00:16:33.593 "peer_address": { 00:16:33.593 "trtype": "TCP", 00:16:33.593 "adrfam": "IPv4", 00:16:33.593 "traddr": "10.0.0.1", 00:16:33.593 "trsvcid": "55140" 00:16:33.593 }, 00:16:33.593 "auth": { 00:16:33.593 "state": "completed", 00:16:33.593 "digest": "sha512", 00:16:33.593 "dhgroup": "ffdhe2048" 00:16:33.593 } 00:16:33.593 } 00:16:33.593 ]' 00:16:33.593 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.594 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.594 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.594 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.594 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.594 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.594 13:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.594 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.851 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:16:33.851 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:16:34.787 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.787 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:34.787 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.787 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.787 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.787 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.787 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:34.787 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:35.354 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:35.354 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.354 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.354 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:35.354 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:35.354 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.354 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.354 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.354 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:35.354 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.354 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.354 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.354 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.921 00:16:35.921 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.921 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.921 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.178 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.178 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.178 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.178 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.178 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.178 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.178 { 00:16:36.178 "cntlid": 109, 00:16:36.178 "qid": 0, 00:16:36.178 "state": "enabled", 00:16:36.178 "thread": "nvmf_tgt_poll_group_000", 00:16:36.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:36.178 "listen_address": { 00:16:36.178 "trtype": "TCP", 00:16:36.178 "adrfam": "IPv4", 00:16:36.178 "traddr": "10.0.0.2", 00:16:36.178 "trsvcid": "4420" 00:16:36.178 }, 00:16:36.178 "peer_address": { 00:16:36.178 "trtype": "TCP", 00:16:36.178 "adrfam": "IPv4", 00:16:36.178 "traddr": "10.0.0.1", 00:16:36.178 "trsvcid": "55166" 00:16:36.178 }, 00:16:36.178 "auth": { 00:16:36.178 "state": "completed", 00:16:36.178 "digest": "sha512", 00:16:36.178 "dhgroup": "ffdhe2048" 00:16:36.178 } 00:16:36.178 } 00:16:36.178 ]' 00:16:36.178 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.437 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.437 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.437 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.437 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.437 13:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.437 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.437 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.695 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:16:36.695 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:16:37.629 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.629 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:37.629 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.629 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.629 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.629 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.629 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:37.629 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:38.195 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:38.195 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.195 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.195 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:38.195 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:38.195 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.195 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 00:16:38.195 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:38.195 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.195 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.195 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:38.195 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.195 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.760 00:16:38.760 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.760 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.760 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.018 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.019 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.019 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.019 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.019 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.019 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.019 { 00:16:39.019 "cntlid": 111, 00:16:39.019 "qid": 0, 00:16:39.019 "state": "enabled", 00:16:39.019 "thread": "nvmf_tgt_poll_group_000", 00:16:39.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:39.019 "listen_address": { 00:16:39.019 "trtype": "TCP", 00:16:39.019 "adrfam": "IPv4", 00:16:39.019 "traddr": "10.0.0.2", 00:16:39.019 "trsvcid": "4420" 00:16:39.019 }, 00:16:39.019 "peer_address": { 00:16:39.019 "trtype": "TCP", 00:16:39.019 "adrfam": "IPv4", 00:16:39.019 "traddr": "10.0.0.1", 00:16:39.019 "trsvcid": "60366" 00:16:39.019 }, 00:16:39.019 "auth": { 00:16:39.019 "state": "completed", 00:16:39.019 "digest": "sha512", 00:16:39.019 "dhgroup": "ffdhe2048" 00:16:39.019 } 00:16:39.019 } 00:16:39.019 ]' 00:16:39.019 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.277 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.277 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.277 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:39.277 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.277 
13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.277 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.277 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.535 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:16:39.535 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:16:40.469 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.469 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:40.469 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.469 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.727 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.727 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.727 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.727 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:40.727 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:40.985 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:40.985 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.985 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:40.985 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:40.985 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:40.985 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.985 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.985 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.985 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.985 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.985 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.986 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.986 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.550 00:16:41.550 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.550 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.550 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.807 { 00:16:41.807 "cntlid": 113, 00:16:41.807 "qid": 0, 00:16:41.807 "state": "enabled", 00:16:41.807 "thread": "nvmf_tgt_poll_group_000", 00:16:41.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:41.807 "listen_address": { 00:16:41.807 "trtype": "TCP", 00:16:41.807 "adrfam": "IPv4", 00:16:41.807 "traddr": "10.0.0.2", 00:16:41.807 "trsvcid": "4420" 00:16:41.807 }, 00:16:41.807 "peer_address": { 00:16:41.807 "trtype": "TCP", 00:16:41.807 "adrfam": "IPv4", 00:16:41.807 "traddr": "10.0.0.1", 00:16:41.807 "trsvcid": "60400" 00:16:41.807 }, 00:16:41.807 "auth": { 00:16:41.807 "state": "completed", 00:16:41.807 "digest": "sha512", 00:16:41.807 "dhgroup": "ffdhe3072" 00:16:41.807 } 00:16:41.807 } 00:16:41.807 ]' 00:16:41.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.064 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.064 13:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.064 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.064 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.064 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.669 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:16:42.669 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:16:43.603 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.603 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:43.603 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.603 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.603 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.603 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.603 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:43.603 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:43.862 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:43.862 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.862 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.862 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:43.862 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:43.862 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.862 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.862 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.862 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.862 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.862 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.862 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.862 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.427 00:16:44.427 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.427 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.428 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.686 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.686 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.686 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.686 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.686 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.686 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.686 { 00:16:44.686 "cntlid": 115, 00:16:44.686 "qid": 0, 00:16:44.686 "state": "enabled", 00:16:44.686 "thread": "nvmf_tgt_poll_group_000", 00:16:44.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:44.686 "listen_address": { 00:16:44.686 "trtype": "TCP", 00:16:44.686 "adrfam": "IPv4", 00:16:44.686 "traddr": "10.0.0.2", 00:16:44.686 "trsvcid": "4420" 00:16:44.686 }, 00:16:44.686 "peer_address": { 00:16:44.686 "trtype": "TCP", 00:16:44.686 "adrfam": "IPv4", 00:16:44.686 "traddr": "10.0.0.1", 00:16:44.686 "trsvcid": "60432" 00:16:44.686 }, 00:16:44.686 "auth": { 00:16:44.686 "state": "completed", 00:16:44.686 "digest": "sha512", 00:16:44.686 "dhgroup": "ffdhe3072" 00:16:44.686 } 00:16:44.686 } 00:16:44.686 ]' 00:16:44.686 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.686 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.686 13:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.944 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.944 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.944 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.944 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.944 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.202 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:16:45.202 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:16:46.138 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.138 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:46.138 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.138 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.138 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.138 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.138 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:46.138 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:46.396 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:46.396 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.396 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.396 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:46.396 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:46.396 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.396 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.396 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.396 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.396 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.396 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.396 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.396 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.963 00:16:46.963 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.963 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.963 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.221 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.221 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.221 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.221 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.221 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.221 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.221 { 00:16:47.221 "cntlid": 117, 00:16:47.221 "qid": 0, 00:16:47.221 "state": "enabled", 00:16:47.221 "thread": "nvmf_tgt_poll_group_000", 00:16:47.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:47.221 "listen_address": { 00:16:47.221 "trtype": "TCP", 00:16:47.221 "adrfam": "IPv4", 00:16:47.221 "traddr": "10.0.0.2", 00:16:47.221 "trsvcid": "4420" 00:16:47.221 }, 00:16:47.221 "peer_address": { 00:16:47.221 "trtype": "TCP", 00:16:47.221 "adrfam": "IPv4", 00:16:47.221 "traddr": "10.0.0.1", 00:16:47.221 "trsvcid": "59248" 00:16:47.221 }, 00:16:47.221 "auth": { 00:16:47.221 "state": "completed", 00:16:47.221 "digest": "sha512", 00:16:47.221 "dhgroup": "ffdhe3072" 00:16:47.221 } 00:16:47.221 } 00:16:47.221 ]' 00:16:47.221 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:16:47.221 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.221 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.479 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:47.479 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.479 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.479 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.479 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.737 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:16:47.737 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:16:48.670 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.670 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:48.670 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.670 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.928 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.928 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.928 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:48.928 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:49.187 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:49.187 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.187 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.187 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:49.187 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:16:49.187 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.187 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 00:16:49.187 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.187 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.187 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.187 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:49.187 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.187 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.754 00:16:49.754 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.754 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.754 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.012 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.012 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.012 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.012 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.270 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.270 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.270 { 00:16:50.270 "cntlid": 119, 00:16:50.270 "qid": 0, 00:16:50.270 "state": "enabled", 00:16:50.270 "thread": "nvmf_tgt_poll_group_000", 00:16:50.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:50.270 "listen_address": { 00:16:50.270 "trtype": "TCP", 00:16:50.270 "adrfam": "IPv4", 00:16:50.270 "traddr": "10.0.0.2", 00:16:50.270 "trsvcid": "4420" 00:16:50.270 }, 00:16:50.270 "peer_address": { 00:16:50.270 "trtype": "TCP", 00:16:50.270 "adrfam": "IPv4", 00:16:50.270 "traddr": "10.0.0.1", 00:16:50.270 "trsvcid": "59284" 00:16:50.270 }, 00:16:50.270 "auth": { 00:16:50.270 "state": "completed", 00:16:50.270 "digest": "sha512", 00:16:50.270 "dhgroup": "ffdhe3072" 00:16:50.270 } 00:16:50.270 } 00:16:50.270 ]' 00:16:50.270 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:50.270 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.270 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.270 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:50.270 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.270 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.270 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.270 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.858 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:16:50.858 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:16:51.425 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.683 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:51.683 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.683 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.683 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.683 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.683 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.683 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:51.683 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:51.941 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:51.941 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.941 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.941 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:51.941 13:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:51.941 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.941 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.941 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.941 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.941 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.941 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.941 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.941 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.508 00:16:52.508 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.508 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.508 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.766 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.766 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.766 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.766 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.024 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.024 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.024 { 00:16:53.024 "cntlid": 121, 00:16:53.024 "qid": 0, 00:16:53.024 "state": "enabled", 00:16:53.024 "thread": "nvmf_tgt_poll_group_000", 00:16:53.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:53.024 "listen_address": { 00:16:53.024 "trtype": "TCP", 00:16:53.024 "adrfam": "IPv4", 00:16:53.024 "traddr": "10.0.0.2", 00:16:53.024 "trsvcid": "4420" 00:16:53.024 }, 00:16:53.024 "peer_address": { 00:16:53.024 "trtype": "TCP", 00:16:53.024 "adrfam": "IPv4", 00:16:53.024 "traddr": "10.0.0.1", 00:16:53.024 "trsvcid": "59308" 00:16:53.024 }, 00:16:53.024 "auth": { 00:16:53.024 "state": "completed", 00:16:53.024 "digest": "sha512", 00:16:53.024 "dhgroup": "ffdhe4096" 
00:16:53.024 } 00:16:53.024 } 00:16:53.024 ]' 00:16:53.024 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.024 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.024 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.024 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:53.025 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.025 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.025 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.025 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.591 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:16:53.591 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:16:54.524 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.524 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:54.524 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.524 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.524 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.524 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.524 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:54.524 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:54.524 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:54.525 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.525 13:21:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.525 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:54.525 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:54.525 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.525 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.525 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.525 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.790 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.790 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.790 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.790 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.358 00:16:55.358 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.358 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.358 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.616 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.616 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.616 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.616 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.616 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.616 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.616 { 00:16:55.616 "cntlid": 123, 00:16:55.616 "qid": 0, 00:16:55.616 "state": "enabled", 00:16:55.616 "thread": "nvmf_tgt_poll_group_000", 00:16:55.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:55.616 "listen_address": { 00:16:55.616 "trtype": "TCP", 00:16:55.616 "adrfam": "IPv4", 00:16:55.616 "traddr": "10.0.0.2", 00:16:55.616 "trsvcid": "4420" 00:16:55.616 }, 00:16:55.616 "peer_address": { 00:16:55.616 "trtype": "TCP", 00:16:55.616 "adrfam": 
"IPv4", 00:16:55.616 "traddr": "10.0.0.1", 00:16:55.616 "trsvcid": "59340" 00:16:55.616 }, 00:16:55.616 "auth": { 00:16:55.616 "state": "completed", 00:16:55.616 "digest": "sha512", 00:16:55.616 "dhgroup": "ffdhe4096" 00:16:55.616 } 00:16:55.616 } 00:16:55.616 ]' 00:16:55.616 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.874 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.875 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.875 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:55.875 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.875 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.875 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.875 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.440 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:16:56.440 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:16:57.373 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.373 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:16:57.373 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.373 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.373 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.374 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.374 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:57.374 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:57.632 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:57.632 13:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.632 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.632 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:57.632 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:57.632 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.632 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.632 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.632 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.632 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.632 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.632 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.632 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.199 00:16:58.199 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.199 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.199 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.764 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.764 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.764 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.764 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.764 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.764 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.764 { 00:16:58.764 "cntlid": 125, 00:16:58.764 "qid": 0, 00:16:58.764 "state": "enabled", 00:16:58.764 "thread": "nvmf_tgt_poll_group_000", 00:16:58.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:16:58.764 "listen_address": { 00:16:58.764 "trtype": "TCP", 00:16:58.764 "adrfam": "IPv4", 00:16:58.764 "traddr": "10.0.0.2", 
00:16:58.764 "trsvcid": "4420" 00:16:58.764 }, 00:16:58.764 "peer_address": { 00:16:58.764 "trtype": "TCP", 00:16:58.764 "adrfam": "IPv4", 00:16:58.764 "traddr": "10.0.0.1", 00:16:58.764 "trsvcid": "55844" 00:16:58.764 }, 00:16:58.764 "auth": { 00:16:58.764 "state": "completed", 00:16:58.764 "digest": "sha512", 00:16:58.764 "dhgroup": "ffdhe4096" 00:16:58.764 } 00:16:58.764 } 00:16:58.764 ]' 00:16:58.764 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.764 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.764 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.764 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:58.765 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.765 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.765 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.765 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.331 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:16:59.331 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:17:00.265 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.265 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:17:00.265 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.265 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.265 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.265 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.265 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:00.265 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:00.832 13:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:00.832 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.832 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.832 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:00.832 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:00.832 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.832 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 00:17:00.832 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.832 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.832 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.832 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.832 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.832 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.398 00:17:01.398 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.398 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.398 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.964 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.964 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.964 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.964 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.964 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.964 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.964 { 00:17:01.964 "cntlid": 127, 00:17:01.964 "qid": 0, 00:17:01.964 "state": "enabled", 00:17:01.964 "thread": "nvmf_tgt_poll_group_000", 00:17:01.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:17:01.964 "listen_address": { 00:17:01.964 "trtype": "TCP", 00:17:01.964 "adrfam": "IPv4", 
00:17:01.964 "traddr": "10.0.0.2", 00:17:01.964 "trsvcid": "4420" 00:17:01.964 }, 00:17:01.964 "peer_address": { 00:17:01.964 "trtype": "TCP", 00:17:01.964 "adrfam": "IPv4", 00:17:01.964 "traddr": "10.0.0.1", 00:17:01.964 "trsvcid": "55882" 00:17:01.964 }, 00:17:01.964 "auth": { 00:17:01.964 "state": "completed", 00:17:01.964 "digest": "sha512", 00:17:01.964 "dhgroup": "ffdhe4096" 00:17:01.964 } 00:17:01.964 } 00:17:01.964 ]' 00:17:01.964 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.964 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.964 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.964 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:01.964 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.222 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.222 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.222 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.481 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:17:02.481 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:17:03.416 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.416 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:17:03.416 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.417 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.417 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.417 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.417 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.417 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:03.417 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
00:17:03.984 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:03.984 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.984 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.984 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:03.984 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:03.984 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.984 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.984 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.984 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.984 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.984 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.984 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.984 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.552 00:17:04.552 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.552 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.552 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.810 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.810 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.810 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.810 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.810 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.810 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.810 { 00:17:04.810 "cntlid": 129, 00:17:04.810 "qid": 0, 00:17:04.810 "state": "enabled", 00:17:04.810 "thread": "nvmf_tgt_poll_group_000", 00:17:04.810 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:17:04.810 "listen_address": { 00:17:04.810 "trtype": "TCP", 00:17:04.810 "adrfam": "IPv4", 00:17:04.810 "traddr": "10.0.0.2", 00:17:04.810 "trsvcid": "4420" 00:17:04.810 }, 00:17:04.810 "peer_address": { 00:17:04.810 "trtype": "TCP", 00:17:04.810 "adrfam": "IPv4", 00:17:04.810 "traddr": "10.0.0.1", 00:17:04.810 "trsvcid": "55926" 00:17:04.810 }, 00:17:04.810 "auth": { 00:17:04.810 "state": "completed", 00:17:04.810 "digest": "sha512", 00:17:04.810 "dhgroup": "ffdhe6144" 00:17:04.810 } 00:17:04.810 } 00:17:04.810 ]' 00:17:04.810 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.068 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.068 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.069 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:05.069 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.069 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.069 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.069 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.327 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:17:05.327 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:17:06.260 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.260 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:17:06.260 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.260 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.260 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.260 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.261 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe6144 00:17:06.261 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:06.518 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:06.518 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.518 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:06.518 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:06.518 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:06.518 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.518 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.518 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.518 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.777 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.777 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.777 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.777 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.344 00:17:07.344 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.344 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.344 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.911 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.911 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.911 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.911 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.911 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.911 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@74 -- # qpairs='[ 00:17:07.911 { 00:17:07.911 "cntlid": 131, 00:17:07.911 "qid": 0, 00:17:07.911 "state": "enabled", 00:17:07.911 "thread": "nvmf_tgt_poll_group_000", 00:17:07.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:17:07.911 "listen_address": { 00:17:07.911 "trtype": "TCP", 00:17:07.911 "adrfam": "IPv4", 00:17:07.911 "traddr": "10.0.0.2", 00:17:07.911 "trsvcid": "4420" 00:17:07.911 }, 00:17:07.911 "peer_address": { 00:17:07.911 "trtype": "TCP", 00:17:07.911 "adrfam": "IPv4", 00:17:07.911 "traddr": "10.0.0.1", 00:17:07.911 "trsvcid": "34972" 00:17:07.911 }, 00:17:07.911 "auth": { 00:17:07.911 "state": "completed", 00:17:07.911 "digest": "sha512", 00:17:07.911 "dhgroup": "ffdhe6144" 00:17:07.911 } 00:17:07.911 } 00:17:07.911 ]' 00:17:07.911 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.911 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.911 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.911 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:07.911 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.911 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.911 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.911 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.478 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:17:08.478 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:17:09.044 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.044 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:17:09.044 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.044 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.044 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.044 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.044 13:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:09.044 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:09.610 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:09.610 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.610 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.610 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:09.610 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:09.610 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.610 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.610 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.610 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.610 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.610 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.610 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.610 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.176 00:17:10.176 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.176 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.176 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.742 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.742 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.742 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.742 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.742 13:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.742 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.742 { 00:17:10.742 "cntlid": 133, 00:17:10.742 "qid": 0, 00:17:10.742 "state": "enabled", 00:17:10.742 "thread": "nvmf_tgt_poll_group_000", 00:17:10.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:17:10.742 "listen_address": { 00:17:10.742 "trtype": "TCP", 00:17:10.742 "adrfam": "IPv4", 00:17:10.742 "traddr": "10.0.0.2", 00:17:10.742 "trsvcid": "4420" 00:17:10.742 }, 00:17:10.742 "peer_address": { 00:17:10.742 "trtype": "TCP", 00:17:10.742 "adrfam": "IPv4", 00:17:10.742 "traddr": "10.0.0.1", 00:17:10.742 "trsvcid": "35008" 00:17:10.742 }, 00:17:10.742 "auth": { 00:17:10.742 "state": "completed", 00:17:10.742 "digest": "sha512", 00:17:10.742 "dhgroup": "ffdhe6144" 00:17:10.742 } 00:17:10.742 } 00:17:10.742 ]' 00:17:10.742 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.742 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.742 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.742 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:10.742 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.742 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.742 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.742 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.306 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:17:11.306 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:17:12.239 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.239 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:17:12.239 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.239 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.239 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
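[editor's note] The block that just finished is the kernel-initiator leg of the check: the same DH-HCHAP secrets are handed to nvme-cli, the connection is made and torn down, and the host entry is removed so the next key starts clean. A rough sketch of that sequence, using only commands and flags that appear in the trace; the key strings are placeholders standing in for the full DHHC-1:... values printed above, and scripts/rpc.py stands in for the suite's rpc_cmd wrapper (assumed to address the default SPDK target socket, which the trace does not expand):

  # Placeholders: substitute the full DHHC-1:... strings shown in the trace.
  key='DHHC-1:02:...'    # host secret for this iteration
  ckey='DHHC-1:01:...'   # bidirectional (controller) secret, when one is configured

  # Kernel initiator: authenticate against the target with the per-key secrets.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 \
      --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # Revoke the host so the following iteration re-adds it with the next key.
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723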
00:17:12.239 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.239 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:12.239 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:12.497 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:12.497 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.498 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:12.498 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:12.498 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:12.498 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.498 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 00:17:12.498 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.498 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.498 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.498 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:12.498 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.498 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.064 00:17:13.064 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.064 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.064 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.630 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.630 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.630 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.630 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.630 
13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.630 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.630 { 00:17:13.630 "cntlid": 135, 00:17:13.630 "qid": 0, 00:17:13.630 "state": "enabled", 00:17:13.630 "thread": "nvmf_tgt_poll_group_000", 00:17:13.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:17:13.630 "listen_address": { 00:17:13.630 "trtype": "TCP", 00:17:13.630 "adrfam": "IPv4", 00:17:13.630 "traddr": "10.0.0.2", 00:17:13.630 "trsvcid": "4420" 00:17:13.630 }, 00:17:13.630 "peer_address": { 00:17:13.630 "trtype": "TCP", 00:17:13.630 "adrfam": "IPv4", 00:17:13.630 "traddr": "10.0.0.1", 00:17:13.630 "trsvcid": "35030" 00:17:13.630 }, 00:17:13.630 "auth": { 00:17:13.630 "state": "completed", 00:17:13.630 "digest": "sha512", 00:17:13.630 "dhgroup": "ffdhe6144" 00:17:13.630 } 00:17:13.630 } 00:17:13.630 ]' 00:17:13.630 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.630 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.630 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.886 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:13.886 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.886 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.886 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.886 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.146 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:17:14.146 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:17:15.080 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.080 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:17:15.080 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.080 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.080 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.080 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:17:15.080 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.080 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:15.080 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:15.337 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:15.337 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.337 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:15.337 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:15.337 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:15.337 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.337 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.337 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.337 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.337 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.337 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.337 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.338 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.269 00:17:16.269 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.269 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.269 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.562 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.562 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.562 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:16.562 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.562 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.562 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.562 { 00:17:16.562 "cntlid": 137, 00:17:16.562 "qid": 0, 00:17:16.562 "state": "enabled", 00:17:16.562 "thread": "nvmf_tgt_poll_group_000", 00:17:16.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:17:16.562 "listen_address": { 00:17:16.562 "trtype": "TCP", 00:17:16.562 "adrfam": "IPv4", 00:17:16.562 "traddr": "10.0.0.2", 00:17:16.562 "trsvcid": "4420" 00:17:16.562 }, 00:17:16.562 "peer_address": { 00:17:16.562 "trtype": "TCP", 00:17:16.562 "adrfam": "IPv4", 00:17:16.562 "traddr": "10.0.0.1", 00:17:16.562 "trsvcid": "35070" 00:17:16.562 }, 00:17:16.562 "auth": { 00:17:16.562 "state": "completed", 00:17:16.562 "digest": "sha512", 00:17:16.562 "dhgroup": "ffdhe8192" 00:17:16.562 } 00:17:16.562 } 00:17:16.562 ]' 00:17:16.562 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.562 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.562 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.820 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:16.820 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.821 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.821 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.821 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.078 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:17:17.078 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:17:18.013 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.013 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:17:18.013 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
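The block above is one complete pass of the suite's connect_authenticate helper for sha512/ffdhe8192 with key0/ckey0: the host is restricted to that digest/dhgroup pair, the host NQN is re-added to the subsystem with the key pair, a controller is attached through the host RPC server, and the resulting qpair's auth block is checked before everything is torn down again. A minimal sketch of that cycle, assuming the host RPC server listens on /var/tmp/host.sock (as in the hostrpc wrapper in the trace) and that a plain rpc.py call reaches the target application, which is what rpc_cmd resolves to in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side: only negotiate the digest/dhgroup pair under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

  # Target side: allow the host NQN with a bidirectional key pair.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Attach a controller from the host; DH-HMAC-CHAP runs during CONNECT.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Verify the negotiated parameters on the target's qpair.
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Tear down before the next key/dhgroup combination.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The same pass is then repeated through nvme-cli (nvme connect ... --dhchap-secret/--dhchap-ctrl-secret followed by nvme disconnect), exercising the kernel initiator path with the raw DHHC-1 secrets, as the surrounding entries show.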
00:17:18.013 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.013 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.013 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.013 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:18.013 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:18.271 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:18.271 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.271 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:18.271 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:18.271 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:18.271 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.271 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.271 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.271 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.529 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.529 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.529 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.529 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.463 00:17:19.463 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.463 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.463 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.722 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.722 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.722 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.722 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.722 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.722 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.722 { 00:17:19.722 "cntlid": 139, 00:17:19.722 "qid": 0, 00:17:19.722 "state": "enabled", 00:17:19.722 "thread": "nvmf_tgt_poll_group_000", 00:17:19.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:17:19.722 "listen_address": { 00:17:19.722 "trtype": "TCP", 00:17:19.722 "adrfam": "IPv4", 00:17:19.722 "traddr": "10.0.0.2", 00:17:19.722 "trsvcid": "4420" 00:17:19.722 }, 00:17:19.722 "peer_address": { 00:17:19.722 "trtype": "TCP", 00:17:19.722 "adrfam": "IPv4", 00:17:19.722 "traddr": "10.0.0.1", 00:17:19.722 "trsvcid": "48934" 00:17:19.722 }, 00:17:19.722 "auth": { 00:17:19.722 "state": "completed", 00:17:19.722 "digest": "sha512", 00:17:19.722 "dhgroup": "ffdhe8192" 00:17:19.722 } 00:17:19.722 } 00:17:19.722 ]' 00:17:19.722 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.722 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.722 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.981 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.981 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.981 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.981 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.981 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.548 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:17:20.548 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: --dhchap-ctrl-secret DHHC-1:02:NGE0Yzk5NjEwMzFjMTg0MmY5MTIyYTc0MDZhZjMzOGYwMDQwZGMyOTAxNWJlOGU2h0wHfg==: 00:17:21.113 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.113 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:17:21.113 
13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.113 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.113 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.113 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.113 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:21.113 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:21.680 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:21.680 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.680 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:21.680 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:21.680 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:21.680 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.680 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.680 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.680 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.680 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.680 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.680 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.680 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.614 00:17:22.614 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.614 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.614 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.872 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.872 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.872 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.872 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.872 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.872 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.872 { 00:17:22.872 "cntlid": 141, 00:17:22.872 "qid": 0, 00:17:22.872 "state": "enabled", 00:17:22.872 "thread": "nvmf_tgt_poll_group_000", 00:17:22.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:17:22.872 "listen_address": { 00:17:22.872 "trtype": "TCP", 00:17:22.872 "adrfam": "IPv4", 00:17:22.872 "traddr": "10.0.0.2", 00:17:22.872 "trsvcid": "4420" 00:17:22.872 }, 00:17:22.872 "peer_address": { 00:17:22.872 "trtype": "TCP", 00:17:22.872 "adrfam": "IPv4", 00:17:22.872 "traddr": "10.0.0.1", 00:17:22.872 "trsvcid": "48964" 00:17:22.872 }, 00:17:22.872 "auth": { 00:17:22.872 "state": "completed", 00:17:22.872 "digest": "sha512", 00:17:22.872 "dhgroup": "ffdhe8192" 00:17:22.872 } 00:17:22.872 } 00:17:22.872 ]' 00:17:22.872 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.872 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.872 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.131 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:23.131 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.131 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.131 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.131 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.389 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:17:23.389 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:01:MGZhNmNjMDFkNWEzN2MwODUwYTQ1MWQ1YTJmNDg3NGSpcPMo: 00:17:24.340 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.340 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:17:24.340 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.340 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.340 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.340 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.340 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.340 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.906 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:24.906 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.906 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:24.906 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:24.906 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:24.906 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.906 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 00:17:24.906 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.906 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.906 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.906 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:24.906 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.906 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.471 00:17:25.729 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.729 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.729 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.987 13:22:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.987 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.987 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.987 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.987 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.987 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.987 { 00:17:25.987 "cntlid": 143, 00:17:25.987 "qid": 0, 00:17:25.987 "state": "enabled", 00:17:25.987 "thread": "nvmf_tgt_poll_group_000", 00:17:25.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:17:25.987 "listen_address": { 00:17:25.988 "trtype": "TCP", 00:17:25.988 "adrfam": "IPv4", 00:17:25.988 "traddr": "10.0.0.2", 00:17:25.988 "trsvcid": "4420" 00:17:25.988 }, 00:17:25.988 "peer_address": { 00:17:25.988 "trtype": "TCP", 00:17:25.988 "adrfam": "IPv4", 00:17:25.988 "traddr": "10.0.0.1", 00:17:25.988 "trsvcid": "48998" 00:17:25.988 }, 00:17:25.988 "auth": { 00:17:25.988 "state": "completed", 00:17:25.988 "digest": "sha512", 00:17:25.988 "dhgroup": "ffdhe8192" 00:17:25.988 } 00:17:25.988 } 00:17:25.988 ]' 00:17:25.988 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.988 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.988 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.988 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:25.988 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.988 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.988 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.988 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.553 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:17:26.553 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:17:27.487 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.487 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:17:27.487 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.487 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.487 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.487 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:27.487 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:27.487 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:27.487 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.487 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.487 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.745 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:27.745 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.745 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:27.745 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:27.745 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:27.745 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.745 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.745 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.745 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.745 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.745 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.745 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.745 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.678 00:17:28.678 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.678 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.678 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.245 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.245 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.245 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.245 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.245 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.245 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.245 { 00:17:29.245 "cntlid": 145, 00:17:29.245 "qid": 0, 00:17:29.245 "state": "enabled", 00:17:29.245 "thread": "nvmf_tgt_poll_group_000", 00:17:29.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:17:29.245 "listen_address": { 00:17:29.245 "trtype": "TCP", 00:17:29.245 "adrfam": "IPv4", 00:17:29.245 "traddr": "10.0.0.2", 00:17:29.245 "trsvcid": "4420" 00:17:29.245 }, 00:17:29.245 "peer_address": { 00:17:29.245 "trtype": "TCP", 00:17:29.245 "adrfam": "IPv4", 00:17:29.245 "traddr": "10.0.0.1", 00:17:29.245 "trsvcid": "39646" 00:17:29.245 }, 00:17:29.245 "auth": { 00:17:29.245 "state": "completed", 00:17:29.245 "digest": "sha512", 00:17:29.245 "dhgroup": "ffdhe8192" 00:17:29.245 } 00:17:29.245 } 00:17:29.245 ]' 00:17:29.245 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.245 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.245 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.245 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:29.245 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.245 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.245 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.245 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.811 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:17:29.811 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:00:YzRiNzA0MWFmYzcxNTk1NmRiY2ExYTllZGY1YjI2NWU4NjEwMzM0ZDQwZmRkMTQz0J7vMQ==: --dhchap-ctrl-secret DHHC-1:03:M2IzZmY0YWViYzI0NzA2NmYwMDJmYWFkNzU2MTk4MTkxZGNlYzhkOTU2ZjkzMWQ5OTQ4YTY3ODUxZmEwZWM5MlKuho0=: 00:17:30.746 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.746 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:17:30.746 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.746 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.746 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.746 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 00:17:30.746 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.746 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.746 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.746 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:30.746 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:30.746 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:30.746 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:30.746 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.746 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:30.746 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.746 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:30.746 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:30.746 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:31.314 request: 00:17:31.314 { 00:17:31.314 "name": "nvme0", 00:17:31.314 "trtype": "tcp", 00:17:31.314 "traddr": "10.0.0.2", 
00:17:31.314 "adrfam": "ipv4", 00:17:31.314 "trsvcid": "4420", 00:17:31.314 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:31.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:17:31.314 "prchk_reftag": false, 00:17:31.314 "prchk_guard": false, 00:17:31.314 "hdgst": false, 00:17:31.314 "ddgst": false, 00:17:31.314 "dhchap_key": "key2", 00:17:31.314 "allow_unrecognized_csi": false, 00:17:31.314 "method": "bdev_nvme_attach_controller", 00:17:31.314 "req_id": 1 00:17:31.314 } 00:17:31.314 Got JSON-RPC error response 00:17:31.314 response: 00:17:31.314 { 00:17:31.314 "code": -5, 00:17:31.314 "message": "Input/output error" 00:17:31.314 } 00:17:31.314 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:31.314 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:31.314 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:31.314 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:31.314 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:17:31.314 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.314 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.314 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.314 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.314 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.314 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.314 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.314 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:31.314 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:31.314 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:31.314 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:31.314 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.315 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:31.315 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.315 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:31.315 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:31.315 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:32.251 request: 00:17:32.251 { 00:17:32.251 "name": "nvme0", 00:17:32.251 "trtype": "tcp", 00:17:32.251 "traddr": "10.0.0.2", 00:17:32.251 "adrfam": "ipv4", 00:17:32.251 "trsvcid": "4420", 00:17:32.251 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:32.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:17:32.251 "prchk_reftag": false, 00:17:32.251 "prchk_guard": false, 00:17:32.251 "hdgst": false, 00:17:32.251 "ddgst": false, 00:17:32.251 "dhchap_key": "key1", 00:17:32.251 "dhchap_ctrlr_key": "ckey2", 00:17:32.251 "allow_unrecognized_csi": false, 00:17:32.251 "method": "bdev_nvme_attach_controller", 00:17:32.251 "req_id": 1 00:17:32.251 } 00:17:32.251 Got JSON-RPC error response 00:17:32.251 response: 00:17:32.251 { 00:17:32.251 "code": -5, 00:17:32.251 "message": "Input/output error" 00:17:32.251 } 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.251 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.218 request: 00:17:33.218 { 00:17:33.218 "name": "nvme0", 00:17:33.218 "trtype": "tcp", 00:17:33.218 "traddr": "10.0.0.2", 00:17:33.218 "adrfam": "ipv4", 00:17:33.218 "trsvcid": "4420", 00:17:33.218 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:33.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:17:33.218 "prchk_reftag": false, 00:17:33.218 "prchk_guard": false, 00:17:33.218 "hdgst": false, 00:17:33.218 "ddgst": false, 00:17:33.218 "dhchap_key": "key1", 00:17:33.218 "dhchap_ctrlr_key": "ckey1", 00:17:33.218 "allow_unrecognized_csi": false, 00:17:33.218 "method": "bdev_nvme_attach_controller", 00:17:33.218 "req_id": 1 00:17:33.218 } 00:17:33.218 Got JSON-RPC error response 00:17:33.218 response: 00:17:33.218 { 00:17:33.218 "code": -5, 00:17:33.218 "message": "Input/output error" 00:17:33.218 } 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67075 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 67075 ']' 00:17:33.218 13:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 67075 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67075 00:17:33.218 killing process with pid 67075 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67075' 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 67075 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 67075 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@324 -- # nvmfpid=70412 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # waitforlisten 70412 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70412 ']' 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
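The NOT bdev_connect blocks above are the negative half of the test: with the subsystem reconfigured to accept only key1 (and later key1/ckey1) for this host, attach attempts that use key2, ckey2, or ckey1 in the wrong role must fail, and the JSON-RPC responses recorded above show the expected code -5 / "Input/output error" from the rejected DH-HMAC-CHAP negotiation. A minimal sketch of one such expected-failure check, under the same socket assumptions as the earlier sketch; the if ! wrapper here only stands in for the suite's NOT helper:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Target side accepts only key1 for this host.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1

  # Attaching with key2 must fail; the RPC surfaces -5 / "Input/output error".
  if ! $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2; then
    echo "attach with a mismatched DH-HMAC-CHAP key failed as expected"
  fi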
00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:33.218 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.785 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.785 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:33.785 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:17:33.785 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:33.785 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.785 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.785 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:33.785 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70412 00:17:33.785 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70412 ']' 00:17:33.785 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.785 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:33.785 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
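At this point the original target (pid 67075) has been killed and nvmfappstart has launched a replacement (pid 70412) with --wait-for-rpc and the nvmf_auth log flag, so the remaining passes run with authentication-level logging. A rough sketch of that restart, reusing the command line from the trace; the polling loop is only a stand-in for the suite's waitforlisten helper, and the socket path is the one named in the "Waiting for process..." message above:

  # Relaunch the target inside the test netns with auth debug logging;
  # --wait-for-rpc holds initialization until RPCs arrive.
  ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!

  # Wait for the RPC socket before issuing any configuration calls.
  while [[ ! -S /var/tmp/spdk.sock ]]; do sleep 0.5; done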
00:17:33.785 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:33.785 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.043 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:34.044 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:34.044 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:34.044 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.044 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.301 null0 00:17:34.301 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.301 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:34.301 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ddw 00:17:34.301 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.301 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.301 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.301 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.s2o ]] 00:17:34.301 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.s2o 00:17:34.301 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.301 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.301 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.301 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:34.301 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.oqh 00:17:34.301 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.301 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.a2o ]] 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.a2o 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:34.302 13:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.e4d 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.zvw ]] 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zvw 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.08s 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:17:34.302 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.718 nvme0n1 00:17:35.718 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.718 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.718 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.977 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.977 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.977 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.977 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.977 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.977 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.977 { 00:17:35.977 "cntlid": 1, 00:17:35.977 "qid": 0, 00:17:35.977 "state": "enabled", 00:17:35.977 "thread": "nvmf_tgt_poll_group_000", 00:17:35.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:17:35.977 "listen_address": { 00:17:35.977 "trtype": "TCP", 00:17:35.977 "adrfam": "IPv4", 00:17:35.977 "traddr": "10.0.0.2", 00:17:35.977 "trsvcid": "4420" 00:17:35.977 }, 00:17:35.977 "peer_address": { 00:17:35.977 "trtype": "TCP", 00:17:35.977 "adrfam": "IPv4", 00:17:35.977 "traddr": "10.0.0.1", 00:17:35.977 "trsvcid": "39722" 00:17:35.977 }, 00:17:35.977 "auth": { 00:17:35.977 "state": "completed", 00:17:35.977 "digest": "sha512", 00:17:35.977 "dhgroup": "ffdhe8192" 00:17:35.977 } 00:17:35.977 } 00:17:35.977 ]' 00:17:35.977 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.235 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.235 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.235 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.235 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.235 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.235 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.235 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.493 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:17:36.493 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:17:37.427 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.427 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:17:37.427 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.427 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.427 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.427 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key3 00:17:37.427 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.427 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.427 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.427 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:37.427 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:37.686 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:37.686 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:37.686 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:37.686 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:37.686 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.686 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:37.686 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.686 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:37.686 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.686 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:38.253 request: 00:17:38.253 { 00:17:38.253 "name": "nvme0", 00:17:38.253 "trtype": "tcp", 00:17:38.253 "traddr": "10.0.0.2", 00:17:38.253 "adrfam": "ipv4", 00:17:38.253 "trsvcid": "4420", 00:17:38.253 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:38.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:17:38.253 "prchk_reftag": false, 00:17:38.253 "prchk_guard": false, 00:17:38.253 "hdgst": false, 00:17:38.253 "ddgst": false, 00:17:38.253 "dhchap_key": "key3", 00:17:38.253 "allow_unrecognized_csi": false, 00:17:38.253 "method": "bdev_nvme_attach_controller", 00:17:38.253 "req_id": 1 00:17:38.253 } 00:17:38.253 Got JSON-RPC error response 00:17:38.253 response: 00:17:38.253 { 00:17:38.253 "code": -5, 00:17:38.253 "message": "Input/output error" 00:17:38.253 } 00:17:38.253 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:38.253 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:38.253 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:38.253 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:38.253 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:38.253 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:38.253 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:38.253 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:38.819 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:38.819 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:38.819 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:38.819 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:38.819 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.819 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:38.819 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.819 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:38.819 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:38.819 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.078 request: 00:17:39.078 { 00:17:39.078 "name": "nvme0", 00:17:39.078 "trtype": "tcp", 00:17:39.078 "traddr": "10.0.0.2", 00:17:39.078 "adrfam": "ipv4", 00:17:39.078 "trsvcid": "4420", 00:17:39.078 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:39.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:17:39.078 "prchk_reftag": false, 00:17:39.078 "prchk_guard": false, 00:17:39.078 "hdgst": false, 00:17:39.078 "ddgst": false, 00:17:39.078 "dhchap_key": "key3", 00:17:39.078 "allow_unrecognized_csi": false, 00:17:39.078 "method": "bdev_nvme_attach_controller", 00:17:39.078 "req_id": 1 00:17:39.078 } 00:17:39.078 Got JSON-RPC error response 00:17:39.078 response: 00:17:39.078 { 00:17:39.078 "code": -5, 00:17:39.078 "message": "Input/output error" 00:17:39.078 } 00:17:39.078 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:39.078 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:39.078 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:39.078 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:39.078 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:39.078 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:39.078 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:39.078 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:39.078 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:39.078 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:39.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:17:39.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:17:39.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:39.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:39.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:39.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:39.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:39.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:39.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:39.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:40.212 request: 00:17:40.212 { 00:17:40.212 "name": "nvme0", 00:17:40.212 "trtype": "tcp", 00:17:40.212 "traddr": "10.0.0.2", 00:17:40.212 "adrfam": "ipv4", 00:17:40.212 "trsvcid": "4420", 00:17:40.212 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:40.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:17:40.212 "prchk_reftag": false, 00:17:40.212 "prchk_guard": false, 00:17:40.212 "hdgst": false, 00:17:40.212 "ddgst": false, 00:17:40.212 "dhchap_key": "key0", 00:17:40.212 "dhchap_ctrlr_key": "key1", 00:17:40.212 "allow_unrecognized_csi": false, 00:17:40.212 "method": "bdev_nvme_attach_controller", 00:17:40.212 "req_id": 1 00:17:40.212 } 00:17:40.212 Got JSON-RPC error response 00:17:40.212 response: 00:17:40.212 { 00:17:40.212 "code": -5, 00:17:40.212 "message": "Input/output error" 00:17:40.212 } 00:17:40.212 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:40.212 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:40.212 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:40.212 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:17:40.212 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:40.212 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:40.212 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:40.779 nvme0n1 00:17:40.779 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:40.779 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:40.779 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.345 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.345 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.345 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.604 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 00:17:41.604 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.604 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.604 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.604 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:41.604 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:41.604 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:42.978 nvme0n1 00:17:42.978 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:42.978 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:42.978 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.236 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.236 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:43.236 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.236 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.236 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.236 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:43.236 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.236 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:43.495 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.495 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:17:43.495 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid 1dd592da-03b1-46ba-b90a-3aebb25e3723 -l 0 --dhchap-secret DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: --dhchap-ctrl-secret DHHC-1:03:YjUzM2EwMzBiZmQ1YmIzNWE0Mzc4Y2UzNWY1ZDdiYTViYTllMTZiOTBmZWY2NTRjZmRjNzFjNzIwYTQ4OWI0Y52KKzk=: 00:17:44.426 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:44.426 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:44.427 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:44.427 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:44.427 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:44.427 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:44.427 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:44.427 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.427 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.427 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:44.427 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:44.427 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:44.427 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:44.427 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.427 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:44.427 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.427 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:44.427 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:44.427 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:45.363 request: 00:17:45.363 { 00:17:45.363 "name": "nvme0", 00:17:45.363 "trtype": "tcp", 00:17:45.363 "traddr": "10.0.0.2", 00:17:45.363 "adrfam": "ipv4", 00:17:45.363 "trsvcid": "4420", 00:17:45.363 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:45.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723", 00:17:45.363 "prchk_reftag": false, 00:17:45.363 "prchk_guard": false, 00:17:45.363 "hdgst": false, 00:17:45.363 "ddgst": false, 00:17:45.363 "dhchap_key": "key1", 00:17:45.363 "allow_unrecognized_csi": false, 00:17:45.363 "method": "bdev_nvme_attach_controller", 00:17:45.363 "req_id": 1 00:17:45.363 } 00:17:45.363 Got JSON-RPC error response 00:17:45.363 response: 00:17:45.363 { 00:17:45.363 "code": -5, 00:17:45.363 "message": "Input/output error" 00:17:45.363 } 00:17:45.363 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:45.363 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:45.363 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:45.363 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:45.363 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:45.363 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:45.363 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:46.296 nvme0n1 00:17:46.296 
13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:46.296 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:46.296 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.554 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.554 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.554 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.119 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:17:47.119 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.119 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.119 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.119 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:47.119 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:47.119 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:47.684 nvme0n1 00:17:47.684 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:47.684 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:47.684 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.942 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.942 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.942 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.199 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:48.199 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.199 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.199 13:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.199 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: '' 2s 00:17:48.199 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:48.199 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:48.199 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: 00:17:48.199 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:48.199 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:48.199 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:48.199 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: ]] 00:17:48.199 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NDljZDM5OGM3NTk3MzQ4NDdlNzY3NjI2ZWQ1YTdhNWTWgwFR: 00:17:48.199 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:48.199 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:48.199 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: 2s 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:50.105 13:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: ]] 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YmNmMzJiZTlhYWM2NjM3MjFkZjcyYzZhMjEzZjA3MDlmZmQ1OWU5MjM1NWM5NzQyTBxnvA==: 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:50.105 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:52.636 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:52.636 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:17:52.636 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:17:52.636 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:17:52.636 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:17:52.636 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:17:52.636 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:17:52.636 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.636 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:52.636 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.636 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.636 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.636 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:52.636 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:52.636 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:53.203 nvme0n1 00:17:53.203 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:53.203 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.203 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.203 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.203 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:53.203 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:54.145 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:54.145 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:54.145 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.145 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.145 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:17:54.145 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.145 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.145 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.145 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:54.145 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:54.711 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:54.711 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.711 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:54.970 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.970 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:54.970 13:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.970 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.970 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.970 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:54.970 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:54.970 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:54.970 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:54.970 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:54.970 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:54.970 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:54.970 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:54.970 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:55.539 request: 00:17:55.539 { 00:17:55.539 "name": "nvme0", 00:17:55.539 "dhchap_key": "key1", 00:17:55.539 "dhchap_ctrlr_key": "key3", 00:17:55.539 "method": "bdev_nvme_set_keys", 00:17:55.539 "req_id": 1 00:17:55.539 } 00:17:55.539 Got JSON-RPC error response 00:17:55.539 response: 00:17:55.539 { 00:17:55.539 "code": -13, 00:17:55.539 "message": "Permission denied" 00:17:55.539 } 00:17:55.539 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:55.539 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:55.539 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:55.539 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:55.539 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:55.539 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:55.539 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.797 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:55.797 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:57.174 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:57.174 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.174 13:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:57.174 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:57.174 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:57.174 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.174 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.174 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.174 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:57.174 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:57.174 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:58.549 nvme0n1 00:17:58.549 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:58.549 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.549 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.549 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.549 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:58.549 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:58.549 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:58.549 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:58.549 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.549 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:58.549 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.549 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:17:58.549 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:59.117 request: 00:17:59.117 { 00:17:59.117 "name": "nvme0", 00:17:59.117 "dhchap_key": "key2", 00:17:59.117 "dhchap_ctrlr_key": "key0", 00:17:59.117 "method": "bdev_nvme_set_keys", 00:17:59.117 "req_id": 1 00:17:59.117 } 00:17:59.117 Got JSON-RPC error response 00:17:59.117 response: 00:17:59.117 { 00:17:59.117 "code": -13, 00:17:59.117 "message": "Permission denied" 00:17:59.117 } 00:17:59.117 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:59.117 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:59.117 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:59.117 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:59.117 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:59.117 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.117 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:59.377 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:59.377 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:00.312 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:00.312 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.312 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:00.878 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:00.878 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:00.878 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:00.878 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67099 00:18:00.878 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 67099 ']' 00:18:00.878 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 67099 00:18:00.878 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:00.878 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:00.878 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67099 00:18:00.878 killing process with pid 67099 00:18:00.878 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:00.878 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:00.878 13:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67099' 00:18:00.878 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 67099 00:18:00.878 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 67099 00:18:01.136 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:01.136 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # nvmfcleanup 00:18:01.136 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@99 -- # sync 00:18:01.136 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:18:01.136 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # set +e 00:18:01.136 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:18:01.136 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:18:01.136 rmmod nvme_tcp 00:18:01.136 rmmod nvme_fabrics 00:18:01.136 rmmod nvme_keyring 00:18:01.136 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:18:01.136 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # set -e 00:18:01.136 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # return 0 00:18:01.136 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # '[' -n 70412 ']' 00:18:01.137 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@333 -- # killprocess 70412 00:18:01.137 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 70412 ']' 00:18:01.137 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 70412 00:18:01.137 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:01.137 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:01.137 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70412 00:18:01.137 killing process with pid 70412 00:18:01.137 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:01.137 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:01.137 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70412' 00:18:01.137 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 70412 00:18:01.137 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 70412 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # nvmf_fini 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@264 -- # local dev 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:18:01.395 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@271 -- # continue 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@271 -- # [[ -e 
/sys/class/net/target1/address ]] 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@271 -- # continue 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # _dev=0 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # dev_map=() 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@284 -- # iptr 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@538 -- # iptables-save 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@538 -- # iptables-restore 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Ddw /tmp/spdk.key-sha256.oqh /tmp/spdk.key-sha384.e4d /tmp/spdk.key-sha512.08s /tmp/spdk.key-sha512.s2o /tmp/spdk.key-sha384.a2o /tmp/spdk.key-sha256.zvw '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:18:01.396 00:18:01.396 real 3m40.307s 00:18:01.396 user 8m51.746s 00:18:01.396 sys 0m33.240s 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:01.396 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.396 ************************************ 00:18:01.396 END TEST nvmf_auth_target 00:18:01.396 ************************************ 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:01.688 ************************************ 00:18:01.688 START TEST nvmf_bdevio_no_huge 00:18:01.688 ************************************ 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:01.688 * Looking for test storage... 
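The nvmf_auth_target teardown traced above (killprocess on the app pid, the bounded modprobe -r retry loop, the iptables restore that filters out SPDK-tagged rules, and removal of the generated key files) condenses to a few commands. A minimal standalone sketch, assuming root; the pid value and the key glob are illustrative placeholders, not harness variables:

#!/usr/bin/env bash
# Illustrative nvmftestfini-style cleanup; tgt_pid and key paths are examples.
tgt_pid=70412
kill "$tgt_pid" 2>/dev/null || true
while kill -0 "$tgt_pid" 2>/dev/null; do sleep 0.5; done   # wait for the app to exit

set +e                                  # unloading may fail while references drain
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1
done
set -e

# Drop only the firewall rules that carry an SPDK_NVMF comment tag.
iptables-save | grep -v SPDK_NVMF | iptables-restore

rm -f /tmp/spdk.key-*                   # keys generated for the auth test run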
00:18:01.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:01.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.688 --rc genhtml_branch_coverage=1 00:18:01.688 --rc genhtml_function_coverage=1 00:18:01.688 --rc genhtml_legend=1 00:18:01.688 --rc geninfo_all_blocks=1 00:18:01.688 --rc geninfo_unexecuted_blocks=1 00:18:01.688 00:18:01.688 ' 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:01.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.688 --rc genhtml_branch_coverage=1 00:18:01.688 --rc genhtml_function_coverage=1 00:18:01.688 --rc genhtml_legend=1 00:18:01.688 --rc geninfo_all_blocks=1 00:18:01.688 --rc geninfo_unexecuted_blocks=1 00:18:01.688 00:18:01.688 ' 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:01.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.688 --rc genhtml_branch_coverage=1 00:18:01.688 --rc genhtml_function_coverage=1 00:18:01.688 --rc genhtml_legend=1 00:18:01.688 --rc geninfo_all_blocks=1 00:18:01.688 --rc geninfo_unexecuted_blocks=1 00:18:01.688 00:18:01.688 ' 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:01.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.688 --rc genhtml_branch_coverage=1 00:18:01.688 --rc genhtml_function_coverage=1 00:18:01.688 --rc genhtml_legend=1 00:18:01.688 --rc geninfo_all_blocks=1 00:18:01.688 --rc geninfo_unexecuted_blocks=1 00:18:01.688 00:18:01.688 ' 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:01.688 
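The lcov version check traced above splits the dotted versions into arrays and compares them field by field. The same idea, written as a self-contained helper rather than the harness's cmp_versions, assuming purely numeric components:

version_lt() {                       # returns 0 when $1 < $2
    local -a a b
    IFS=. read -ra a <<<"$1"
    IFS=. read -ra b <<<"$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                         # equal versions are not "less than"
}
version_lt 1.15 2 && echo "old lcov: pass the extra branch/function coverage flags"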
13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.688 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@50 -- # : 0 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:18:01.689 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:18:01.689 13:23:03 
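The "[: : integer expression expected" message above is bash complaining that a numeric test was fed an empty string ('[' '' -eq 1 ']'). A defensive pattern for that class of check, using a made-up variable name rather than the harness's own:

# SOME_FLAG is illustrative; default unset/empty values to 0 before comparing.
if [[ "${SOME_FLAG:-0}" -eq 1 ]]; then
    echo "flag enabled"
fi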
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@54 -- # have_pci_nics=0 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # prepare_net_devs 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # local -g is_hw=no 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # remove_target_ns 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@276 -- # nvmf_veth_init 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@233 -- # create_target_ns 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@234 -- # create_main_bridge 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@114 -- # delete_main_bridge 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@130 -- # return 0 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:18:01.689 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@27 -- # local -gA dev_map 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@28 -- # local -g _dev 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # ips=() 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:18:01.948 
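The namespace and bridge bootstrap just traced (create_target_ns, create_main_bridge, and the FORWARD accept rule) reduces to a handful of iproute2/iptables calls. A standalone sketch using the same names as the trace, shown for illustration rather than as the harness's setup.sh:

ip netns add nvmf_ns_spdk                       # target-side network namespace
ip netns exec nvmf_ns_spdk ip link set lo up    # bring up loopback inside it
ip link add nvmf_br type bridge                 # host-side bridge joining all veth peers
ip link set nvmf_br up
# Allow traffic to be forwarded across the bridge; tag the rule so cleanup can find it.
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'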
13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@160 -- # set_up initiator0 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:18:01.948 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@160 -- # set_up target0 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip link set target0 up 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@161 -- # set_up target0_br 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:18:01.949 13:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # add_to_ns target0 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772161 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:18:01.949 10.0.0.1 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772162 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:18:01.949 13:23:03 
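The address pool above is tracked as plain integers (167772161 is 0x0A000001) and only rendered as dotted-quad when an address is assigned, via printf. A sketch of that conversion; the helper name is made up:

# int_to_ip: render a 32-bit integer as dotted-quad (same idea as the trace's val_to_ip).
int_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
        $(( (val >> 8)  & 255 )) $((  val        & 255 ))
}
int_to_ip 167772161   # -> 10.0.0.1 (initiator0)
int_to_ip 167772162   # -> 10.0.0.2 (target0)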
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:18:01.949 10.0.0.2 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@75 -- # set_up initiator0 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@138 -- # set_up target0_br 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # ips=() 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@160 -- # set_up initiator1 00:18:01.949 
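Every interface pair follows the recipe just completed for initiator0/target0 and now repeating for initiator1/target1: two veth pairs, the target end pushed into the namespace, addresses assigned, and the *_br peers enslaved to nvmf_br. Condensed for one pair, assuming the namespace and bridge from the earlier step already exist:

ip link add initiator0 type veth peer name initiator0_br
ip link add target0    type veth peer name target0_br
ip link set target0 netns nvmf_ns_spdk                   # target side lives in the namespace
ip addr add 10.0.0.1/24 dev initiator0
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
ip link set initiator0 up
ip netns exec nvmf_ns_spdk ip link set target0 up
ip link set initiator0_br master nvmf_br                 # hang the host-side peers off the bridge
ip link set target0_br    master nvmf_br
ip link set initiator0_br up
ip link set target0_br up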
13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:18:01.949 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@160 -- # set_up target1 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip link set target1 up 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@161 -- # set_up target1_br 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # add_to_ns target1 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:18:01.950 13:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772163 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:18:01.950 10.0.0.3 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772164 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:18:01.950 10.0.0.4 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@75 -- # set_up initiator1 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:18:01.950 13:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@138 -- # set_up target1_br 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment 
--comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@38 -- # ping_ips 2 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=initiator0 00:18:01.950 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:18:01.951 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:18:01.951 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo initiator0 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=initiator0 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping 
-c 1 10.0.0.1' 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:18:02.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:02.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:18:02.210 00:18:02.210 --- 10.0.0.1 ping statistics --- 00:18:02.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.210 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev target0 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=target0 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo target0 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=target0 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:18:02.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:02.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:18:02.210 00:18:02.210 --- 10.0.0.2 ping statistics --- 00:18:02.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.210 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair++ )) 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=initiator1 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo initiator1 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=initiator1 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:18:02.210 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:02.210 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:18:02.210 00:18:02.210 --- 10.0.0.3 ping statistics --- 00:18:02.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.210 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev target1 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=target1 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo target1 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=target1 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:18:02.210 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:18:02.211 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
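Throughout this section the helpers take an optional in_ns argument and use a bash nameref (local -n) to expand NVMF_TARGET_NS_CMD, i.e. "ip netns exec nvmf_ns_spdk", in front of the command before eval'ing it, which is why every ping and cat above appears twice in the trace (once quoted, once expanded). A rough sketch of that pattern follows; run_maybe_in_ns is an illustrative name, not a function from the scripts.

    # Illustrative helper showing the nameref + eval pattern used in setup.sh.
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)

    run_maybe_in_ns() {
        local in_ns=$1; shift
        if [[ -n $in_ns ]]; then
            local -n ns_cmd=$in_ns             # nameref to the array holding the netns prefix
            eval "${ns_cmd[*]} $*"             # e.g. ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
        else
            eval "$*"                          # run directly on the host
        fi
    }

    run_maybe_in_ns NVMF_TARGET_NS_CMD ping -c 1 10.0.0.1
    run_maybe_in_ns '' ping -c 1 10.0.0.2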
00:18:02.211 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:18:02.211 00:18:02.211 --- 10.0.0.4 ping statistics --- 00:18:02.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.211 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair++ )) 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # return 0 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=initiator0 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo initiator0 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=initiator0 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@183 -- # get_ip_address initiator1 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=initiator1 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo initiator1 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=initiator1 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev target0 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=target0 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo target0 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=target0 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@172 -- # ip=10.0.0.2 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev target1 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=target1 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo target1 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=target1 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # 
timing_enter start_nvmf_tgt 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@324 -- # nvmfpid=71086 00:18:02.211 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # waitforlisten 71086 00:18:02.212 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 71086 ']' 00:18:02.212 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.212 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:02.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.212 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.212 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:02.212 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:02.212 [2024-09-27 13:23:04.006371] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:18:02.212 [2024-09-27 13:23:04.006493] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:02.470 [2024-09-27 13:23:04.154195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:02.470 [2024-09-27 13:23:04.289799] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.470 [2024-09-27 13:23:04.289849] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.470 [2024-09-27 13:23:04.289863] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.470 [2024-09-27 13:23:04.289873] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.470 [2024-09-27 13:23:04.289882] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
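nvmfappstart, as traced above, launches the target application inside the test namespace with huge pages disabled and then waits for its JSON-RPC socket before the test proceeds. The following is a hedged approximation of those two steps, reusing the paths and flags visible in this log; the real waitforlisten helper in autotest_common.sh does more (retry limits, cleanup on timeout), so treat the polling loop as a simplified stand-in.

    # Start the NVMe-oF target in the test namespace (flags copied from the trace).
    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!

    # Poll the RPC socket until the target answers (simplified waitforlisten).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1           # bail out if the target died
        sleep 0.5
    done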
00:18:02.470 [2024-09-27 13:23:04.289970] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:18:02.470 [2024-09-27 13:23:04.291146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:18:02.470 [2024-09-27 13:23:04.291242] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:18:02.470 [2024-09-27 13:23:04.291253] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:02.470 [2024-09-27 13:23:04.313473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:03.403 [2024-09-27 13:23:05.129407] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:03.403 Malloc0 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.403 13:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:03.403 [2024-09-27 13:23:05.169594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # config=() 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # local subsystem config 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # for subsystem in "${@:-1}" 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # config+=("$(cat <<-EOF 00:18:03.403 { 00:18:03.403 "params": { 00:18:03.403 "name": "Nvme$subsystem", 00:18:03.403 "trtype": "$TEST_TRANSPORT", 00:18:03.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.403 "adrfam": "ipv4", 00:18:03.403 "trsvcid": "$NVMF_PORT", 00:18:03.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.403 "hdgst": ${hdgst:-false}, 00:18:03.403 "ddgst": ${ddgst:-false} 00:18:03.403 }, 00:18:03.403 "method": "bdev_nvme_attach_controller" 00:18:03.403 } 00:18:03.403 EOF 00:18:03.403 )") 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # cat 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # jq . 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@393 -- # IFS=, 00:18:03.403 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # printf '%s\n' '{ 00:18:03.403 "params": { 00:18:03.403 "name": "Nvme1", 00:18:03.403 "trtype": "tcp", 00:18:03.403 "traddr": "10.0.0.2", 00:18:03.403 "adrfam": "ipv4", 00:18:03.403 "trsvcid": "4420", 00:18:03.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.403 "hdgst": false, 00:18:03.403 "ddgst": false 00:18:03.403 }, 00:18:03.403 "method": "bdev_nvme_attach_controller" 00:18:03.403 }' 00:18:03.403 [2024-09-27 13:23:05.233546] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
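The bdevio run above gets its bdev configuration on the fly: gen_nvmf_target_json prints a bdev_nvme_attach_controller entry pointing at the listener created just before (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1), and the test feeds it to bdevio through process substitution (--json /dev/fd/62). A sketch of an equivalent standalone invocation is below; only the inner entry is visible in the trace, so the outer "subsystems"/"config" wrapper is assumed here to follow SPDK's usual JSON-config layout.

    cat > /tmp/bdevio_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same flags as the traced run: no huge pages, 1024 MB of memory.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
        --json /tmp/bdevio_nvme.json --no-huge -s 1024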
00:18:03.403 [2024-09-27 13:23:05.233645] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71122 ] 00:18:03.661 [2024-09-27 13:23:05.383339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:03.920 [2024-09-27 13:23:05.525576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.920 [2024-09-27 13:23:05.525738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.920 [2024-09-27 13:23:05.525742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.920 [2024-09-27 13:23:05.540476] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:03.920 I/O targets: 00:18:03.920 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:03.920 00:18:03.920 00:18:03.920 CUnit - A unit testing framework for C - Version 2.1-3 00:18:03.920 http://cunit.sourceforge.net/ 00:18:03.920 00:18:03.920 00:18:03.920 Suite: bdevio tests on: Nvme1n1 00:18:03.920 Test: blockdev write read block ...passed 00:18:03.920 Test: blockdev write zeroes read block ...passed 00:18:03.920 Test: blockdev write zeroes read no split ...passed 00:18:03.920 Test: blockdev write zeroes read split ...passed 00:18:04.177 Test: blockdev write zeroes read split partial ...passed 00:18:04.177 Test: blockdev reset ...[2024-09-27 13:23:05.776241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:04.177 [2024-09-27 13:23:05.776370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97fa20 (9): Bad file descriptor 00:18:04.177 [2024-09-27 13:23:05.790583] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:04.177 passed 00:18:04.177 Test: blockdev write read 8 blocks ...passed 00:18:04.177 Test: blockdev write read size > 128k ...passed 00:18:04.177 Test: blockdev write read invalid size ...passed 00:18:04.177 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:04.177 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:04.177 Test: blockdev write read max offset ...passed 00:18:04.177 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:04.177 Test: blockdev writev readv 8 blocks ...passed 00:18:04.177 Test: blockdev writev readv 30 x 1block ...passed 00:18:04.177 Test: blockdev writev readv block ...passed 00:18:04.177 Test: blockdev writev readv size > 128k ...passed 00:18:04.177 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:04.177 Test: blockdev comparev and writev ...[2024-09-27 13:23:05.799346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:04.177 [2024-09-27 13:23:05.799400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.177 [2024-09-27 13:23:05.799426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:04.177 [2024-09-27 13:23:05.799439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.177 [2024-09-27 13:23:05.800029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:04.177 [2024-09-27 13:23:05.800070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:04.177 [2024-09-27 13:23:05.800093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:04.177 [2024-09-27 13:23:05.800105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:04.177 [2024-09-27 13:23:05.800501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:04.177 [2024-09-27 13:23:05.800538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:04.177 [2024-09-27 13:23:05.800560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:04.177 [2024-09-27 13:23:05.800573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:04.177 [2024-09-27 13:23:05.801281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:04.177 [2024-09-27 13:23:05.801318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:04.177 [2024-09-27 13:23:05.801346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:04.177 [2024-09-27 13:23:05.801361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:04.177 passed 00:18:04.177 Test: blockdev nvme passthru rw ...passed 00:18:04.177 Test: blockdev nvme passthru vendor specific ...[2024-09-27 13:23:05.802282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:04.177 [2024-09-27 13:23:05.802315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:04.178 [2024-09-27 13:23:05.802437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:04.178 [2024-09-27 13:23:05.802456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:04.178 [2024-09-27 13:23:05.802580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:04.178 [2024-09-27 13:23:05.802598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:04.178 passed 00:18:04.178 Test: blockdev nvme admin passthru ...[2024-09-27 13:23:05.802736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:04.178 [2024-09-27 13:23:05.802756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:04.178 passed 00:18:04.178 Test: blockdev copy ...passed 00:18:04.178 00:18:04.178 Run Summary: Type Total Ran Passed Failed Inactive 00:18:04.178 suites 1 1 n/a 0 0 00:18:04.178 tests 23 23 23 0 0 00:18:04.178 asserts 152 152 152 0 n/a 00:18:04.178 00:18:04.178 Elapsed time = 0.174 seconds 00:18:04.435 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:04.435 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.435 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:04.435 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.435 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:04.435 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:04.435 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@331 -- # nvmfcleanup 00:18:04.435 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@99 -- # sync 00:18:04.435 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:18:04.435 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@102 -- # set +e 00:18:04.436 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@103 -- # for i in {1..20} 00:18:04.436 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:18:04.436 rmmod nvme_tcp 00:18:04.436 rmmod nvme_fabrics 00:18:04.436 rmmod nvme_keyring 00:18:04.436 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:18:04.436 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@106 -- # set -e 00:18:04.436 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@107 -- # return 0 00:18:04.436 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # '[' -n 71086 ']' 00:18:04.436 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@333 -- # killprocess 71086 00:18:04.436 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 71086 ']' 00:18:04.436 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 71086 00:18:04.436 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:04.694 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:04.694 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71086 00:18:04.694 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:04.694 killing process with pid 71086 00:18:04.694 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:04.694 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71086' 00:18:04.694 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 71086 00:18:04.694 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 71086 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # nvmf_fini 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@264 -- # local dev 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@267 -- # remove_target_ns 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@268 -- # delete_main_bridge 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:18:04.953 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@271 -- # continue 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@271 -- # continue 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # _dev=0 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # dev_map=() 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@284 -- # iptr 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@538 -- # iptables-save 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@538 -- # iptables-restore 00:18:05.212 ************************************ 00:18:05.212 END TEST nvmf_bdevio_no_huge 00:18:05.212 ************************************ 00:18:05.212 00:18:05.212 real 0m3.580s 00:18:05.212 user 0m11.051s 00:18:05.212 sys 0m1.407s 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:05.212 ************************************ 00:18:05.212 START TEST nvmf_tls 00:18:05.212 ************************************ 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:05.212 * Looking for test storage... 00:18:05.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:05.212 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:05.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.472 --rc genhtml_branch_coverage=1 00:18:05.472 --rc genhtml_function_coverage=1 00:18:05.472 --rc genhtml_legend=1 00:18:05.472 --rc geninfo_all_blocks=1 00:18:05.472 --rc geninfo_unexecuted_blocks=1 00:18:05.472 00:18:05.472 ' 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:05.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.472 --rc genhtml_branch_coverage=1 00:18:05.472 --rc genhtml_function_coverage=1 00:18:05.472 --rc genhtml_legend=1 00:18:05.472 --rc geninfo_all_blocks=1 00:18:05.472 --rc geninfo_unexecuted_blocks=1 00:18:05.472 00:18:05.472 ' 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:05.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.472 --rc genhtml_branch_coverage=1 00:18:05.472 --rc genhtml_function_coverage=1 00:18:05.472 --rc genhtml_legend=1 00:18:05.472 --rc geninfo_all_blocks=1 00:18:05.472 --rc geninfo_unexecuted_blocks=1 00:18:05.472 00:18:05.472 ' 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:05.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.472 --rc genhtml_branch_coverage=1 00:18:05.472 --rc genhtml_function_coverage=1 00:18:05.472 --rc genhtml_legend=1 00:18:05.472 --rc geninfo_all_blocks=1 00:18:05.472 --rc geninfo_unexecuted_blocks=1 00:18:05.472 00:18:05.472 ' 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.472 13:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@50 -- # : 0 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:18:05.473 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@54 -- # have_pci_nics=0 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # prepare_net_devs 
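While nvmf/common.sh is being sourced for the TLS suite above, the trace shows a fresh host identity being generated with nvme gen-hostnqn; the UUID suffix of that NQN doubles as the host ID, and both land in the NVME_HOST argument array used for later nvme connect calls. A small sketch of that derivation follows; the parameter expansion used to strip the prefix is an assumption about how the ID is obtained, though it reproduces the values seen in this log.

    NVME_HOSTNQN=$(nvme gen-hostnqn)           # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}            # keep only the part after the last ':' -> the uuid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    echo "host NQN: $NVME_HOSTNQN"
    echo "host ID:  $NVME_HOSTID"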
00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # local -g is_hw=no 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # remove_target_ns 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@276 -- # nvmf_veth_init 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@233 -- # create_target_ns 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@234 -- # create_main_bridge 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@114 -- # delete_main_bridge 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@130 -- # return 0 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # 
eval ' ip link set nvmf_br up' 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@27 -- # local -gA dev_map 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@28 -- # local -g _dev 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # ips=() 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@160 -- # set_up initiator0 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:18:05.473 
13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@160 -- # set_up target0 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip link set target0 up 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@161 -- # set_up target0_br 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # add_to_ns target0 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772161 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:18:05.473 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee 
/sys/class/net/initiator0/ifalias' 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:18:05.474 10.0.0.1 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772162 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:18:05.474 10.0.0.2 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@75 -- # set_up initiator0 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 
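For reference, the address assignment traced above amounts to turning the 32-bit pool value into dotted-quad form and applying it to the interface. The exact splitting done inside setup.sh's val_to_ip is not shown in the trace, so the shift-and-mask below is only one assumed way to reproduce the printf call it records; the ip/tee commands are copied from the trace itself.

# Assumed reconstruction of val_to_ip: 167772161 == 0x0A000001 -> 10.0.0.1
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' $((val >> 24)) $(((val >> 16) & 0xff)) $(((val >> 8) & 0xff)) $((val & 0xff))
}

ip=$(val_to_ip 167772161)                            # 10.0.0.1 for initiator0
ip addr add "$ip/24" dev initiator0                  # as recorded by set_ip
echo "$ip" | tee /sys/class/net/initiator0/ifalias   # alias lookup used later by get_ip_address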
00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@138 -- # set_up target0_br 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:18:05.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # ips=() 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:18:05.734 13:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@160 -- # set_up initiator1 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@160 -- # set_up target1 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip link set target1 up 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@161 -- # set_up target1_br 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # add_to_ns target1 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772163 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:18:05.734 10.0.0.3 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772164 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:18:05.734 10.0.0.4 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@75 -- # set_up initiator1 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:05.734 13:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:18:05.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@138 -- # set_up target1_br 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 
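The second initiator/target pair follows the same recipe as the first. Condensed from the commands recorded above (a reconstruction of the trace using the same device, namespace, and bridge names, not the setup.sh source itself), one pair is wired up roughly like this:

# Namespace and bridge are created once, earlier in the trace.
ip netns add nvmf_ns_spdk
ip link add nvmf_br type bridge
ip link set nvmf_br up

# One veth pair per side; the target end lives inside the namespace.
ip link add initiator1 type veth peer name initiator1_br
ip link add target1 type veth peer name target1_br
ip link set target1 netns nvmf_ns_spdk

# Addresses from the 10.0.0.0/24 pool, as recorded by set_ip.
ip addr add 10.0.0.3/24 dev initiator1
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1

# Bring everything up and attach the bridge ends to nvmf_br.
ip link set initiator1 up
ip netns exec nvmf_ns_spdk ip link set target1 up
ip link set initiator1_br master nvmf_br && ip link set initiator1_br up
ip link set target1_br master nvmf_br && ip link set target1_br up

# Open the NVMe/TCP port on the initiator side, tagged for later cleanup.
iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'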
00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@38 -- # ping_ips 2 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=initiator0 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo initiator0 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=initiator0 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:18:05.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:05.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:18:05.735 00:18:05.735 --- 10.0.0.1 ping statistics --- 00:18:05.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.735 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev target0 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=target0 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo target0 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=target0 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:18:05.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:05.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.027 ms 00:18:05.735 00:18:05.735 --- 10.0.0.2 ping statistics --- 00:18:05.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.735 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair++ )) 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=initiator1 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo initiator1 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=initiator1 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:18:05.735 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:05.735 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:18:05.735 00:18:05.735 --- 10.0.0.3 ping statistics --- 00:18:05.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.735 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:18:05.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev target1 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=target1 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo target1 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=target1 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:18:05.736 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:18:05.736 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.114 ms 00:18:05.736 00:18:05.736 --- 10.0.0.4 ping statistics --- 00:18:05.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.736 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair++ )) 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # return 0 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=initiator0 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo initiator0 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=initiator0 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:05.736 13:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=initiator1 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo initiator1 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=initiator1 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:05.736 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev target0 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=target0 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo target0 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=target0 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:18:05.996 13:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev target1 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=target1 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo target1 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=target1 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@324 -- # nvmfpid=71368 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # waitforlisten 71368 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71368 ']' 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.996 13:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:05.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:05.996 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.996 [2024-09-27 13:23:07.693062] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:18:05.996 [2024-09-27 13:23:07.693152] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.996 [2024-09-27 13:23:07.834932] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.255 [2024-09-27 13:23:07.905508] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.255 [2024-09-27 13:23:07.905574] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.255 [2024-09-27 13:23:07.905589] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.255 [2024-09-27 13:23:07.905599] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.255 [2024-09-27 13:23:07.905607] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
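Condensing the launch recorded above: nvmf_tgt runs inside the test namespace with --wait-for-rpc, and the script waits for the RPC socket before switching the default socket implementation to ssl. The sketch below uses the binary path and RPC calls shown in the trace; the polling loop is only an illustrative stand-in for the suite's waitforlisten helper.

# Start the target inside the namespace; --wait-for-rpc defers full init
# until framework_start_init is issued over RPC later in the test.
ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
nvmfpid=$!

# Illustrative wait: poll the RPC socket until the application answers.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

# First TLS-related step recorded after the wait: make ssl the default sock impl.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl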
00:18:06.255 [2024-09-27 13:23:07.905646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.255 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:06.255 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:06.255 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:18:06.255 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:06.255 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.255 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.255 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:06.514 true 00:18:06.515 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:06.515 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # jq -r .tls_version 00:18:06.774 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # version=0 00:18:06.774 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # [[ 0 != \0 ]] 00:18:06.774 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:07.341 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # jq -r .tls_version 00:18:07.342 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:07.600 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # version=13 00:18:07.600 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@78 -- # [[ 13 != \1\3 ]] 00:18:07.600 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:07.859 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:07.859 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # jq -r .tls_version 00:18:08.129 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # version=7 00:18:08.129 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@86 -- # [[ 7 != \7 ]] 00:18:08.129 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:08.129 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # jq -r .enable_ktls 00:18:08.415 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # ktls=false 00:18:08.415 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@93 -- # [[ false != \f\a\l\s\e ]] 00:18:08.415 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:08.673 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:08.673 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 -- # jq -r 
.enable_ktls 00:18:09.241 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 -- # ktls=true 00:18:09.241 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@101 -- # [[ true != \t\r\u\e ]] 00:18:09.241 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:09.500 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:09.500 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # jq -r .enable_ktls 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # ktls=false 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@109 -- # [[ false != \f\a\l\s\e ]] 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # local prefix key digest 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # prefix=NVMeTLSkey-1 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # key=00112233445566778899aabbccddeeff 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # digest=1 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # python - 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@115 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # local prefix key digest 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # prefix=NVMeTLSkey-1 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # key=ffeeddccbbaa99887766554433221100 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # digest=1 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # python - 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@115 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@117 -- # mktemp 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@117 -- # key_path=/tmp/tmp.VI4GnECQVx 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # mktemp 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key_2_path=/tmp/tmp.Th4M1tgnyN 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:09.759 13:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # chmod 0600 /tmp/tmp.VI4GnECQVx 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # chmod 0600 /tmp/tmp.Th4M1tgnyN 00:18:09.759 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:10.017 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:10.584 [2024-09-27 13:23:12.128229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:10.584 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # setup_nvmf_tgt /tmp/tmp.VI4GnECQVx 00:18:10.584 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VI4GnECQVx 00:18:10.584 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:10.844 [2024-09-27 13:23:12.447456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.844 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:11.103 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:11.361 [2024-09-27 13:23:13.039588] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:11.361 [2024-09-27 13:23:13.039911] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.361 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:11.620 malloc0 00:18:11.620 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:11.879 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VI4GnECQVx 00:18:12.447 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:12.706 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.VI4GnECQVx 00:18:24.941 Initializing NVMe Controllers 00:18:24.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:24.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:24.941 Initialization complete. Launching workers. 
00:18:24.941 ======================================================== 00:18:24.941 Latency(us) 00:18:24.941 Device Information : IOPS MiB/s Average min max 00:18:24.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8515.07 33.26 7519.11 1666.70 22272.27 00:18:24.941 ======================================================== 00:18:24.941 Total : 8515.07 33.26 7519.11 1666.70 22272.27 00:18:24.941 00:18:24.941 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@139 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VI4GnECQVx 00:18:24.941 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:24.941 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:24.941 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:24.941 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VI4GnECQVx 00:18:24.941 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:24.941 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71600 00:18:24.941 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:24.941 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:24.941 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71600 /var/tmp/bdevperf.sock 00:18:24.941 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71600 ']' 00:18:24.941 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.941 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:24.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.941 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.941 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:24.941 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.941 [2024-09-27 13:23:24.652981] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
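For reference, the interchange PSK strings generated by format_interchange_psk above (target/tls.sh@114 and @115) follow the NVMeTLSkey-1:<hash>:<base64>: layout used throughout this run. The Python sketch below reconstructs what the helper appears to compute, under the assumption (inferred from the strings in this log, not taken from the SPDK sources) that the configured key characters are used as raw bytes and suffixed with a little-endian CRC-32 before base64 encoding:

import base64
import zlib

def format_interchange_psk(key: str, hash_id: int) -> str:
    # Assumption: the configured key is treated as raw ASCII bytes and a
    # little-endian CRC-32 of those bytes is appended before base64 encoding.
    data = key.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, "little")
    return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, base64.b64encode(data + crc).decode())

# If the assumption holds, these reproduce the two keys used by this test run:
print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))   # key   (tls.sh@114)
print(format_interchange_psk("ffeeddccbbaa99887766554433221100", 1))   # key_2 (tls.sh@115)

The second argument is the hash specifier rendered as two hex digits (01 for the 32-character keys here; the 48-character key_long generated later in this log uses 02).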
00:18:24.941 [2024-09-27 13:23:24.653108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71600 ] 00:18:24.941 [2024-09-27 13:23:24.797904] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.941 [2024-09-27 13:23:24.887176] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.941 [2024-09-27 13:23:24.918498] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:24.941 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:24.941 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:24.941 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VI4GnECQVx 00:18:24.941 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:24.941 [2024-09-27 13:23:26.365285] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:24.941 TLSTESTn1 00:18:24.941 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:24.941 Running I/O for 10 seconds... 00:18:35.178 3806.00 IOPS, 14.87 MiB/s 3703.50 IOPS, 14.47 MiB/s 3657.00 IOPS, 14.29 MiB/s 3548.00 IOPS, 13.86 MiB/s 3493.20 IOPS, 13.65 MiB/s 3449.83 IOPS, 13.48 MiB/s 3474.00 IOPS, 13.57 MiB/s 3520.62 IOPS, 13.75 MiB/s 3527.44 IOPS, 13.78 MiB/s 3534.10 IOPS, 13.81 MiB/s 00:18:35.178 Latency(us) 00:18:35.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.178 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:35.178 Verification LBA range: start 0x0 length 0x2000 00:18:35.178 TLSTESTn1 : 10.02 3539.95 13.83 0.00 0.00 36090.77 6791.91 35031.97 00:18:35.178 =================================================================================================================== 00:18:35.178 Total : 3539.95 13.83 0.00 0.00 36090.77 6791.91 35031.97 00:18:35.178 { 00:18:35.178 "results": [ 00:18:35.178 { 00:18:35.178 "job": "TLSTESTn1", 00:18:35.178 "core_mask": "0x4", 00:18:35.178 "workload": "verify", 00:18:35.178 "status": "finished", 00:18:35.178 "verify_range": { 00:18:35.178 "start": 0, 00:18:35.178 "length": 8192 00:18:35.178 }, 00:18:35.178 "queue_depth": 128, 00:18:35.178 "io_size": 4096, 00:18:35.178 "runtime": 10.018497, 00:18:35.178 "iops": 3539.9521505072066, 00:18:35.178 "mibps": 13.827938087918776, 00:18:35.178 "io_failed": 0, 00:18:35.178 "io_timeout": 0, 00:18:35.178 "avg_latency_us": 36090.77003119592, 00:18:35.178 "min_latency_us": 6791.912727272727, 00:18:35.178 "max_latency_us": 35031.97090909091 00:18:35.178 } 00:18:35.178 ], 00:18:35.178 "core_count": 1 00:18:35.178 } 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # 
killprocess 71600 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71600 ']' 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71600 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71600 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:35.178 killing process with pid 71600 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71600' 00:18:35.178 Received shutdown signal, test time was about 10.000000 seconds 00:18:35.178 00:18:35.178 Latency(us) 00:18:35.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.178 =================================================================================================================== 00:18:35.178 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71600 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71600 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@142 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Th4M1tgnyN 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Th4M1tgnyN 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Th4M1tgnyN 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Th4M1tgnyN 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71741 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71741 /var/tmp/bdevperf.sock 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71741 ']' 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:35.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:35.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.179 [2024-09-27 13:23:36.927500] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:18:35.179 [2024-09-27 13:23:36.927628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71741 ] 00:18:35.437 [2024-09-27 13:23:37.068674] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.437 [2024-09-27 13:23:37.127353] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.437 [2024-09-27 13:23:37.156907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:35.437 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:35.437 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:35.437 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Th4M1tgnyN 00:18:36.003 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:36.003 [2024-09-27 13:23:37.848225] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.261 [2024-09-27 13:23:37.858978] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:36.261 [2024-09-27 13:23:37.859109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdaf1f0 (107): Transport endpoint is not connected 00:18:36.261 [2024-09-27 13:23:37.860095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdaf1f0 (9): Bad file descriptor 00:18:36.261 [2024-09-27 13:23:37.861091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is 
in error state 00:18:36.261 [2024-09-27 13:23:37.861112] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:36.261 [2024-09-27 13:23:37.861124] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:36.261 [2024-09-27 13:23:37.861135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:36.261 request: 00:18:36.261 { 00:18:36.261 "name": "TLSTEST", 00:18:36.261 "trtype": "tcp", 00:18:36.261 "traddr": "10.0.0.2", 00:18:36.261 "adrfam": "ipv4", 00:18:36.261 "trsvcid": "4420", 00:18:36.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:36.261 "prchk_reftag": false, 00:18:36.261 "prchk_guard": false, 00:18:36.261 "hdgst": false, 00:18:36.261 "ddgst": false, 00:18:36.261 "psk": "key0", 00:18:36.261 "allow_unrecognized_csi": false, 00:18:36.261 "method": "bdev_nvme_attach_controller", 00:18:36.261 "req_id": 1 00:18:36.261 } 00:18:36.261 Got JSON-RPC error response 00:18:36.261 response: 00:18:36.261 { 00:18:36.261 "code": -5, 00:18:36.261 "message": "Input/output error" 00:18:36.261 } 00:18:36.261 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71741 00:18:36.261 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71741 ']' 00:18:36.261 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71741 00:18:36.261 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:36.261 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:36.261 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71741 00:18:36.261 killing process with pid 71741 00:18:36.261 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.261 00:18:36.261 Latency(us) 00:18:36.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.261 =================================================================================================================== 00:18:36.261 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:36.261 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:36.261 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:36.261 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71741' 00:18:36.261 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71741 00:18:36.261 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71741 00:18:36.261 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:36.261 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@145 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.VI4GnECQVx 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.VI4GnECQVx 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:36.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.VI4GnECQVx 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VI4GnECQVx 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71768 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71768 /var/tmp/bdevperf.sock 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71768 ']' 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:36.262 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.519 [2024-09-27 13:23:38.150088] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
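The NOT run_bdevperf cases in this part of the log (target/tls.sh@142, @145 and @148) are negative tests: @142 points the initiator at the second key, which does not match the key the target registered, while @145 and @148 swap the host or subsystem NQN so the target has no PSK on file for the TLS identity the initiator presents; in all three, bdev_nvme_attach_controller is expected to fail with code -5 (Input/output error). The identity strings in the "Could not find PSK for identity" errors that follow are built from the two NQNs; a small sketch of that format as it is printed in this log (the NVMe0R01 prefix is assumed to encode version 0, a retained PSK, and hash specifier 01):

def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
    # Format as printed by tcp.c/posix.c in the errors below; the prefix
    # decomposition ("NVMe" + "0" + "R" + "01") is an assumption.
    return "NVMe0R01 {} {}".format(hostnqn, subnqn)

# Identity offered by the @145 case, for which nvmf_subsystem_add_host only
# ever registered nqn.2016-06.io.spdk:host1:
print(tls_psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))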
00:18:36.519 [2024-09-27 13:23:38.150217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71768 ] 00:18:36.519 [2024-09-27 13:23:38.290171] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.519 [2024-09-27 13:23:38.348744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.778 [2024-09-27 13:23:38.377646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:36.778 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:36.778 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:36.778 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VI4GnECQVx 00:18:37.039 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:37.297 [2024-09-27 13:23:38.936132] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:37.297 [2024-09-27 13:23:38.941218] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:37.297 [2024-09-27 13:23:38.941295] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:37.297 [2024-09-27 13:23:38.941374] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:37.297 [2024-09-27 13:23:38.941940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbe1f0 (107): Transport endpoint is not connected 00:18:37.297 [2024-09-27 13:23:38.942920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbe1f0 (9): Bad file descriptor 00:18:37.297 [2024-09-27 13:23:38.943916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:37.297 [2024-09-27 13:23:38.943948] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:37.297 [2024-09-27 13:23:38.943960] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:37.297 [2024-09-27 13:23:38.943972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:37.297 request: 00:18:37.297 { 00:18:37.297 "name": "TLSTEST", 00:18:37.297 "trtype": "tcp", 00:18:37.297 "traddr": "10.0.0.2", 00:18:37.297 "adrfam": "ipv4", 00:18:37.297 "trsvcid": "4420", 00:18:37.297 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.297 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:37.297 "prchk_reftag": false, 00:18:37.297 "prchk_guard": false, 00:18:37.297 "hdgst": false, 00:18:37.297 "ddgst": false, 00:18:37.297 "psk": "key0", 00:18:37.297 "allow_unrecognized_csi": false, 00:18:37.297 "method": "bdev_nvme_attach_controller", 00:18:37.297 "req_id": 1 00:18:37.297 } 00:18:37.297 Got JSON-RPC error response 00:18:37.297 response: 00:18:37.297 { 00:18:37.297 "code": -5, 00:18:37.297 "message": "Input/output error" 00:18:37.297 } 00:18:37.297 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71768 00:18:37.297 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71768 ']' 00:18:37.297 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71768 00:18:37.297 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:37.297 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.297 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71768 00:18:37.297 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:37.297 killing process with pid 71768 00:18:37.297 Received shutdown signal, test time was about 10.000000 seconds 00:18:37.298 00:18:37.298 Latency(us) 00:18:37.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.298 =================================================================================================================== 00:18:37.298 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:37.298 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:37.298 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71768' 00:18:37.298 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71768 00:18:37.298 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71768 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@148 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.VI4GnECQVx 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.VI4GnECQVx 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.VI4GnECQVx 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VI4GnECQVx 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71785 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71785 /var/tmp/bdevperf.sock 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71785 ']' 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:37.556 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.556 [2024-09-27 13:23:39.221448] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:18:37.556 [2024-09-27 13:23:39.221583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71785 ] 00:18:37.556 [2024-09-27 13:23:39.361226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.815 [2024-09-27 13:23:39.419142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.815 [2024-09-27 13:23:39.447990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:37.815 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.815 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:37.815 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VI4GnECQVx 00:18:38.074 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:38.333 [2024-09-27 13:23:40.078137] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.333 [2024-09-27 13:23:40.089105] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:38.333 [2024-09-27 13:23:40.089148] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:38.333 [2024-09-27 13:23:40.089202] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:38.333 [2024-09-27 13:23:40.089850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6f1f0 (107): Transport endpoint is not connected 00:18:38.333 [2024-09-27 13:23:40.090838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6f1f0 (9): Bad file descriptor 00:18:38.334 [2024-09-27 13:23:40.091832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:38.334 [2024-09-27 13:23:40.091861] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:38.334 [2024-09-27 13:23:40.091873] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:38.334 [2024-09-27 13:23:40.091885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:38.334 request: 00:18:38.334 { 00:18:38.334 "name": "TLSTEST", 00:18:38.334 "trtype": "tcp", 00:18:38.334 "traddr": "10.0.0.2", 00:18:38.334 "adrfam": "ipv4", 00:18:38.334 "trsvcid": "4420", 00:18:38.334 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:38.334 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.334 "prchk_reftag": false, 00:18:38.334 "prchk_guard": false, 00:18:38.334 "hdgst": false, 00:18:38.334 "ddgst": false, 00:18:38.334 "psk": "key0", 00:18:38.334 "allow_unrecognized_csi": false, 00:18:38.334 "method": "bdev_nvme_attach_controller", 00:18:38.334 "req_id": 1 00:18:38.334 } 00:18:38.334 Got JSON-RPC error response 00:18:38.334 response: 00:18:38.334 { 00:18:38.334 "code": -5, 00:18:38.334 "message": "Input/output error" 00:18:38.334 } 00:18:38.334 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71785 00:18:38.334 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71785 ']' 00:18:38.334 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71785 00:18:38.334 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:38.334 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:38.334 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71785 00:18:38.334 killing process with pid 71785 00:18:38.334 Received shutdown signal, test time was about 10.000000 seconds 00:18:38.334 00:18:38.334 Latency(us) 00:18:38.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.334 =================================================================================================================== 00:18:38.334 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:38.334 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:38.334 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:38.334 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71785' 00:18:38.334 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71785 00:18:38.334 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71785 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@151 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local 
arg=run_bdevperf 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:38.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71806 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71806 /var/tmp/bdevperf.sock 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71806 ']' 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:38.594 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.594 [2024-09-27 13:23:40.354287] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
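Each of these cases starts a second SPDK application (bdevperf with -r /var/tmp/bdevperf.sock) and drives it through scripts/rpc.py; the request:/response: blocks dumped throughout this section are the raw JSON-RPC exchanges on that socket. A minimal stand-alone client illustrating the same exchange is sketched below; the rpc() helper is hypothetical (not part of the SPDK tree), and the method names and parameters are copied from the dumps in this log:

import json
import socket

def rpc(sock_path: str, method: str, params: dict, req_id: int = 1) -> dict:
    # Minimal JSON-RPC client for an SPDK application's Unix domain socket.
    request = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full response arrived")
            buf += chunk
            try:
                return json.loads(buf)   # keep reading until the JSON parses
            except json.JSONDecodeError:
                continue

# Equivalent of the target/tls.sh@33 and @35 steps traced in this section:
# rpc("/var/tmp/bdevperf.sock", "keyring_file_add_key",
#     {"name": "key0", "path": "/tmp/tmp.VI4GnECQVx"})
# rpc("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller",
#     {"name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
#      "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
#      "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "key0"})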
00:18:38.594 [2024-09-27 13:23:40.354383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71806 ] 00:18:38.853 [2024-09-27 13:23:40.482827] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.853 [2024-09-27 13:23:40.542434] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.853 [2024-09-27 13:23:40.571395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:38.853 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:38.853 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:38.853 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:39.112 [2024-09-27 13:23:40.925621] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:39.112 [2024-09-27 13:23:40.925691] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:39.112 request: 00:18:39.112 { 00:18:39.112 "name": "key0", 00:18:39.112 "path": "", 00:18:39.112 "method": "keyring_file_add_key", 00:18:39.112 "req_id": 1 00:18:39.112 } 00:18:39.112 Got JSON-RPC error response 00:18:39.112 response: 00:18:39.112 { 00:18:39.112 "code": -1, 00:18:39.112 "message": "Operation not permitted" 00:18:39.112 } 00:18:39.112 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:39.681 [2024-09-27 13:23:41.237807] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:39.681 [2024-09-27 13:23:41.237883] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:39.681 request: 00:18:39.681 { 00:18:39.681 "name": "TLSTEST", 00:18:39.681 "trtype": "tcp", 00:18:39.681 "traddr": "10.0.0.2", 00:18:39.681 "adrfam": "ipv4", 00:18:39.681 "trsvcid": "4420", 00:18:39.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.681 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.681 "prchk_reftag": false, 00:18:39.681 "prchk_guard": false, 00:18:39.681 "hdgst": false, 00:18:39.681 "ddgst": false, 00:18:39.681 "psk": "key0", 00:18:39.681 "allow_unrecognized_csi": false, 00:18:39.681 "method": "bdev_nvme_attach_controller", 00:18:39.681 "req_id": 1 00:18:39.681 } 00:18:39.681 Got JSON-RPC error response 00:18:39.681 response: 00:18:39.681 { 00:18:39.681 "code": -126, 00:18:39.681 "message": "Required key not available" 00:18:39.681 } 00:18:39.681 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71806 00:18:39.681 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71806 ']' 00:18:39.681 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71806 00:18:39.681 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:39.681 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:39.681 13:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71806 00:18:39.681 killing process with pid 71806 00:18:39.681 Received shutdown signal, test time was about 10.000000 seconds 00:18:39.681 00:18:39.681 Latency(us) 00:18:39.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.681 =================================================================================================================== 00:18:39.682 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71806' 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71806 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71806 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@154 -- # killprocess 71368 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71368 ']' 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71368 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71368 00:18:39.682 killing process with pid 71368 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71368' 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71368 00:18:39.682 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71368 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # local prefix key digest 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # prefix=NVMeTLSkey-1 00:18:39.941 13:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # digest=2 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # python - 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # mktemp 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # key_long_path=/tmp/tmp.pMg9SCUkBu 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@157 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # chmod 0600 /tmp/tmp.pMg9SCUkBu 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # nvmfappstart -m 0x2 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@324 -- # nvmfpid=71841 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # waitforlisten 71841 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71841 ']' 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.941 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.941 [2024-09-27 13:23:41.781160] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:18:39.941 [2024-09-27 13:23:41.781254] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.201 [2024-09-27 13:23:41.917823] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.201 [2024-09-27 13:23:41.976720] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.201 [2024-09-27 13:23:41.976778] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:40.201 [2024-09-27 13:23:41.976790] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.201 [2024-09-27 13:23:41.976799] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.201 [2024-09-27 13:23:41.976807] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:40.201 [2024-09-27 13:23:41.976838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.201 [2024-09-27 13:23:42.006611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:40.460 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:40.460 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:40.460 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:18:40.460 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:40.460 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.460 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.460 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # setup_nvmf_tgt /tmp/tmp.pMg9SCUkBu 00:18:40.460 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pMg9SCUkBu 00:18:40.460 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:40.718 [2024-09-27 13:23:42.341900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.718 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:40.976 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:41.235 [2024-09-27 13:23:42.902059] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:41.235 [2024-09-27 13:23:42.902394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.235 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:41.493 malloc0 00:18:41.493 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:41.752 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pMg9SCUkBu 00:18:42.010 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:42.268 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pMg9SCUkBu 00:18:42.268 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
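setup_nvmf_tgt (target/tls.sh@50-@59, traced above for /tmp/tmp.pMg9SCUkBu) is the target-side half of every case in this file: create the TCP transport, a subsystem backed by a malloc namespace, a TLS-enabled listener (-k), register the PSK file in the keyring and allow host1 with that key. A sketch of the same sequence driven from Python via subprocess is shown below; the rpc.py arguments are copied from the log, the wrapper itself is only illustrative:

import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

def setup_nvmf_tgt(key_path: str) -> None:
    # Same RPC sequence as target/tls.sh@52-@59 above; the key file must
    # already exist and be chmod 0600, since the keyring rejects wider modes.
    calls = [
        ["nvmf_create_transport", "-t", "tcp", "-o"],
        ["nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
         "-s", "SPDK00000000000001", "-m", "10"],
        ["nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
         "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k"],
        ["bdev_malloc_create", "32", "4096", "-b", "malloc0"],
        ["nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1"],
        ["keyring_file_add_key", "key0", key_path],
        ["nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
         "nqn.2016-06.io.spdk:host1", "--psk", "key0"],
    ]
    for call in calls:
        subprocess.run([RPC, *call], check=True)

setup_nvmf_tgt("/tmp/tmp.pMg9SCUkBu")

The 0600 requirement is exercised explicitly later in this run: target/tls.sh@166 chmods the key to 0666 and the subsequent keyring_file_add_key fails with "Invalid permissions for key file".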
00:18:42.268 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:42.268 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:42.268 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pMg9SCUkBu 00:18:42.268 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:42.268 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71890 00:18:42.268 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:42.268 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:42.268 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71890 /var/tmp/bdevperf.sock 00:18:42.268 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71890 ']' 00:18:42.268 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:42.268 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:42.268 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:42.268 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:42.268 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.526 [2024-09-27 13:23:44.124146] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
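One quick consistency check on the result blocks in this section: bdevperf reports both "iops" and "mibps", and for the 4096-byte I/Os used by these runs (-o 4096) the two are related by mibps = iops * 4096 / 2^20, i.e. iops / 256. Using the figures from the earlier TLSTESTn1 run (pid 71600) above:

# Values copied from the first result dump in this section.
io_size = 4096
iops = 3539.9521505072066
reported_mibps = 13.827938087918776

mibps = iops * io_size / 2**20          # equals iops / 256 for 4 KiB I/Os
assert abs(mibps - reported_mibps) < 1e-6

The same relation holds for the run that follows.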
00:18:42.526 [2024-09-27 13:23:44.124873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71890 ] 00:18:42.526 [2024-09-27 13:23:44.268272] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.526 [2024-09-27 13:23:44.327580] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.526 [2024-09-27 13:23:44.356284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:42.785 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:42.785 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:42.785 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pMg9SCUkBu 00:18:43.043 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:43.302 [2024-09-27 13:23:44.922478] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:43.302 TLSTESTn1 00:18:43.302 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:43.302 Running I/O for 10 seconds... 00:18:53.504 3638.00 IOPS, 14.21 MiB/s 3811.00 IOPS, 14.89 MiB/s 3794.67 IOPS, 14.82 MiB/s 3832.50 IOPS, 14.97 MiB/s 3852.00 IOPS, 15.05 MiB/s 3849.83 IOPS, 15.04 MiB/s 3864.71 IOPS, 15.10 MiB/s 3870.62 IOPS, 15.12 MiB/s 3847.11 IOPS, 15.03 MiB/s 3854.20 IOPS, 15.06 MiB/s 00:18:53.504 Latency(us) 00:18:53.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.504 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:53.504 Verification LBA range: start 0x0 length 0x2000 00:18:53.504 TLSTESTn1 : 10.02 3860.06 15.08 0.00 0.00 33098.89 6225.92 45756.04 00:18:53.504 =================================================================================================================== 00:18:53.504 Total : 3860.06 15.08 0.00 0.00 33098.89 6225.92 45756.04 00:18:53.504 { 00:18:53.504 "results": [ 00:18:53.504 { 00:18:53.504 "job": "TLSTESTn1", 00:18:53.504 "core_mask": "0x4", 00:18:53.504 "workload": "verify", 00:18:53.504 "status": "finished", 00:18:53.504 "verify_range": { 00:18:53.504 "start": 0, 00:18:53.504 "length": 8192 00:18:53.504 }, 00:18:53.504 "queue_depth": 128, 00:18:53.504 "io_size": 4096, 00:18:53.504 "runtime": 10.017208, 00:18:53.504 "iops": 3860.057612859791, 00:18:53.504 "mibps": 15.078350050233558, 00:18:53.504 "io_failed": 0, 00:18:53.504 "io_timeout": 0, 00:18:53.504 "avg_latency_us": 33098.89449768067, 00:18:53.504 "min_latency_us": 6225.92, 00:18:53.504 "max_latency_us": 45756.04363636364 00:18:53.504 } 00:18:53.504 ], 00:18:53.504 "core_count": 1 00:18:53.504 } 00:18:53.504 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:53.504 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 
71890 00:18:53.504 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71890 ']' 00:18:53.504 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71890 00:18:53.504 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:53.504 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:53.504 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71890 00:18:53.504 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:53.504 killing process with pid 71890 00:18:53.504 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:53.504 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71890' 00:18:53.504 Received shutdown signal, test time was about 10.000000 seconds 00:18:53.504 00:18:53.504 Latency(us) 00:18:53.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.504 =================================================================================================================== 00:18:53.504 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:53.504 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71890 00:18:53.504 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71890 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # chmod 0666 /tmp/tmp.pMg9SCUkBu 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pMg9SCUkBu 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pMg9SCUkBu 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pMg9SCUkBu 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pMg9SCUkBu 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72019 00:18:53.762 13:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72019 /var/tmp/bdevperf.sock 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72019 ']' 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:53.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:53.762 13:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.763 [2024-09-27 13:23:55.447988] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:18:53.763 [2024-09-27 13:23:55.448118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72019 ] 00:18:54.020 [2024-09-27 13:23:55.619259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.020 [2024-09-27 13:23:55.709837] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.020 [2024-09-27 13:23:55.754938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:54.952 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:54.952 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:54.952 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pMg9SCUkBu 00:18:55.209 [2024-09-27 13:23:56.881366] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pMg9SCUkBu': 0100666 00:18:55.209 [2024-09-27 13:23:56.881420] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:55.209 request: 00:18:55.209 { 00:18:55.209 "name": "key0", 00:18:55.209 "path": "/tmp/tmp.pMg9SCUkBu", 00:18:55.209 "method": "keyring_file_add_key", 00:18:55.209 "req_id": 1 00:18:55.209 } 00:18:55.209 Got JSON-RPC error response 00:18:55.209 response: 00:18:55.209 { 00:18:55.209 "code": -1, 00:18:55.209 "message": "Operation not permitted" 00:18:55.209 } 00:18:55.209 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:55.466 [2024-09-27 13:23:57.181540] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS 
support is considered experimental 00:18:55.466 [2024-09-27 13:23:57.181622] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:55.466 request: 00:18:55.466 { 00:18:55.466 "name": "TLSTEST", 00:18:55.466 "trtype": "tcp", 00:18:55.466 "traddr": "10.0.0.2", 00:18:55.466 "adrfam": "ipv4", 00:18:55.466 "trsvcid": "4420", 00:18:55.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:55.466 "prchk_reftag": false, 00:18:55.466 "prchk_guard": false, 00:18:55.466 "hdgst": false, 00:18:55.466 "ddgst": false, 00:18:55.466 "psk": "key0", 00:18:55.466 "allow_unrecognized_csi": false, 00:18:55.466 "method": "bdev_nvme_attach_controller", 00:18:55.466 "req_id": 1 00:18:55.466 } 00:18:55.466 Got JSON-RPC error response 00:18:55.466 response: 00:18:55.466 { 00:18:55.466 "code": -126, 00:18:55.466 "message": "Required key not available" 00:18:55.466 } 00:18:55.466 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72019 00:18:55.466 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72019 ']' 00:18:55.466 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72019 00:18:55.466 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:55.466 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:55.466 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72019 00:18:55.466 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:55.466 killing process with pid 72019 00:18:55.466 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:55.466 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72019' 00:18:55.466 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72019 00:18:55.466 Received shutdown signal, test time was about 10.000000 seconds 00:18:55.466 00:18:55.466 Latency(us) 00:18:55.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.466 =================================================================================================================== 00:18:55.466 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:55.466 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72019 00:18:55.724 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:55.724 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:55.724 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:55.724 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:55.724 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:55.724 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # killprocess 71841 00:18:55.724 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71841 ']' 00:18:55.724 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71841 00:18:55.724 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:18:55.724 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:55.724 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71841 00:18:55.724 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:55.724 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:55.724 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71841' 00:18:55.724 killing process with pid 71841 00:18:55.724 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71841 00:18:55.724 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71841 00:18:55.983 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # nvmfappstart -m 0x2 00:18:55.983 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:18:55.983 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:55.983 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.983 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@324 -- # nvmfpid=72058 00:18:55.983 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:55.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.983 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # waitforlisten 72058 00:18:55.983 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72058 ']' 00:18:55.983 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.983 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:55.983 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.983 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:55.983 13:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.983 [2024-09-27 13:23:57.681730] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:18:55.983 [2024-09-27 13:23:57.682111] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.983 [2024-09-27 13:23:57.822489] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.240 [2024-09-27 13:23:57.909979] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.240 [2024-09-27 13:23:57.910048] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
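The killprocess calls threaded through this log (71890, 72019 and 71841 so far) all follow the same pattern from autotest_common.sh: signal the PID only if it is still alive and still looks like an SPDK reactor, then reap it. A rough sketch reconstructed from the visible xtrace, not the helper's full source:

  pid=71841                                # whichever app is being torn down
  kill -0 "$pid"                           # is the process still alive?
  name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1 for an SPDK app
  if [ "$name" != sudo ]; then             # never signal a sudo wrapper directly
      echo "killing process with pid $pid"
      kill "$pid"
  fi
  wait "$pid"                              # reap it; the app was launched from the same shell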
00:18:56.241 [2024-09-27 13:23:57.910066] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.241 [2024-09-27 13:23:57.910079] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.241 [2024-09-27 13:23:57.910089] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.241 [2024-09-27 13:23:57.910134] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.241 [2024-09-27 13:23:57.941324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:57.173 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:57.173 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:57.173 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:18:57.173 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:57.173 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.173 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.173 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@173 -- # NOT setup_nvmf_tgt /tmp/tmp.pMg9SCUkBu 00:18:57.173 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:57.173 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.pMg9SCUkBu 00:18:57.173 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:18:57.173 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:57.173 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:18:57.173 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:57.173 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.pMg9SCUkBu 00:18:57.173 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pMg9SCUkBu 00:18:57.173 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:57.430 [2024-09-27 13:23:59.153394] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.430 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:57.688 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:57.946 [2024-09-27 13:23:59.665537] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:57.946 [2024-09-27 13:23:59.665813] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.946 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 
00:18:58.204 malloc0 00:18:58.204 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:58.770 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pMg9SCUkBu 00:18:59.027 [2024-09-27 13:24:00.651734] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pMg9SCUkBu': 0100666 00:18:59.027 [2024-09-27 13:24:00.652030] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:59.027 request: 00:18:59.028 { 00:18:59.028 "name": "key0", 00:18:59.028 "path": "/tmp/tmp.pMg9SCUkBu", 00:18:59.028 "method": "keyring_file_add_key", 00:18:59.028 "req_id": 1 00:18:59.028 } 00:18:59.028 Got JSON-RPC error response 00:18:59.028 response: 00:18:59.028 { 00:18:59.028 "code": -1, 00:18:59.028 "message": "Operation not permitted" 00:18:59.028 } 00:18:59.028 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:59.286 [2024-09-27 13:24:00.959846] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:59.286 [2024-09-27 13:24:00.959930] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:59.286 request: 00:18:59.286 { 00:18:59.286 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.286 "host": "nqn.2016-06.io.spdk:host1", 00:18:59.286 "psk": "key0", 00:18:59.286 "method": "nvmf_subsystem_add_host", 00:18:59.286 "req_id": 1 00:18:59.286 } 00:18:59.286 Got JSON-RPC error response 00:18:59.286 response: 00:18:59.286 { 00:18:59.286 "code": -32603, 00:18:59.286 "message": "Internal error" 00:18:59.286 } 00:18:59.286 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:59.286 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:59.286 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:59.286 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:59.286 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # killprocess 72058 00:18:59.286 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72058 ']' 00:18:59.286 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72058 00:18:59.286 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:59.286 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.286 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72058 00:18:59.286 killing process with pid 72058 00:18:59.286 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:59.286 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:59.286 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72058' 00:18:59.286 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72058 
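Both failures above share one root cause: after tls.sh@166 relaxed the key file to mode 0666, keyring_file refuses to load it ("Invalid permissions for key file ... 0100666", JSON-RPC error -1), so every consumer of key0 fails in turn: bdev_nvme_attach_controller on the initiator (-126, "Required key not available") and nvmf_subsystem_add_host on the target (-32603, "Internal error"). A minimal reproduction of the permission check, using the same paths as this log (rpc.py without -s talks to the default /var/tmp/spdk.sock):

  key=/tmp/tmp.pMg9SCUkBu
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  chmod 0666 "$key"                        # group/other access: rejected
  $rpc keyring_file_add_key key0 "$key"    # fails with "Operation not permitted"

  chmod 0600 "$key"                        # owner-only access: accepted
  $rpc keyring_file_add_key key0 "$key"    # succeeds, as in the later runs below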
00:18:59.286 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72058 00:18:59.596 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # chmod 0600 /tmp/tmp.pMg9SCUkBu 00:18:59.596 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # nvmfappstart -m 0x2 00:18:59.596 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:18:59.596 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:59.596 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.596 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@324 -- # nvmfpid=72127 00:18:59.596 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:59.596 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # waitforlisten 72127 00:18:59.596 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72127 ']' 00:18:59.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.596 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.596 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:59.596 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.596 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:59.596 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.596 [2024-09-27 13:24:01.267808] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:18:59.596 [2024-09-27 13:24:01.268219] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.868 [2024-09-27 13:24:01.440821] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.868 [2024-09-27 13:24:01.531888] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.869 [2024-09-27 13:24:01.531968] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.869 [2024-09-27 13:24:01.531986] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.869 [2024-09-27 13:24:01.531998] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.869 [2024-09-27 13:24:01.532009] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
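Each "Waiting for process to start up and listen on UNIX domain socket ..." message in this log comes from the waitforlisten helper, which blocks until the freshly started app answers on its RPC socket before the script issues any further RPCs. The helper's implementation is not shown here; one way to approximate the same wait by hand is a simple poll loop like the following (socket path and retry count are placeholders):

  sock=/var/tmp/spdk.sock        # or /var/tmp/bdevperf.sock for the bdevperf runs
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Poll the RPC socket until the app responds, giving up after ~100 tries.
  for _ in $(seq 1 100); do
      if $rpc -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.5
  done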
00:18:59.869 [2024-09-27 13:24:01.532051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.869 [2024-09-27 13:24:01.579084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:00.803 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:00.803 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:00.803 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:19:00.803 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:00.803 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.803 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.803 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # setup_nvmf_tgt /tmp/tmp.pMg9SCUkBu 00:19:00.803 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pMg9SCUkBu 00:19:00.803 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:01.061 [2024-09-27 13:24:02.709453] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.061 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:01.319 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:01.577 [2024-09-27 13:24:03.301577] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:01.577 [2024-09-27 13:24:03.301839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.577 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:02.143 malloc0 00:19:02.143 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:02.401 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pMg9SCUkBu 00:19:02.659 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:02.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
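With the key file back at mode 0600, the target-side setup (setup_nvmf_tgt in tls.sh) goes through cleanly this time. Pulled out of the xtrace, the RPC sequence that builds the TLS-enabled target is the following; the commands are copied from this log and go to the default /var/tmp/spdk.sock:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k marks the listener as a secure channel, i.e. NVMe/TCP with TLS.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.pMg9SCUkBu
  # Authorize the host NQN and bind it to the PSK registered above.
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0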
00:19:02.917 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # bdevperf_pid=72194 00:19:02.917 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@183 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:02.917 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:02.917 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # waitforlisten 72194 /var/tmp/bdevperf.sock 00:19:02.917 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72194 ']' 00:19:02.917 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:02.917 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:02.917 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:02.917 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:02.917 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.917 [2024-09-27 13:24:04.724607] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:19:02.917 [2024-09-27 13:24:04.725067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72194 ] 00:19:03.175 [2024-09-27 13:24:04.870071] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.175 [2024-09-27 13:24:04.928832] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:03.175 [2024-09-27 13:24:04.957833] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:04.108 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:04.108 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:04.108 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pMg9SCUkBu 00:19:04.365 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:04.931 [2024-09-27 13:24:06.596456] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:04.931 TLSTESTn1 00:19:04.931 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:05.189 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # tgtconf='{ 00:19:05.189 "subsystems": [ 00:19:05.189 { 00:19:05.189 "subsystem": "keyring", 00:19:05.189 "config": [ 00:19:05.189 { 00:19:05.189 "method": "keyring_file_add_key", 00:19:05.189 "params": { 00:19:05.189 "name": "key0", 00:19:05.189 "path": "/tmp/tmp.pMg9SCUkBu" 00:19:05.189 } 00:19:05.189 } 00:19:05.189 ] 00:19:05.189 }, 
00:19:05.189 { 00:19:05.189 "subsystem": "iobuf", 00:19:05.189 "config": [ 00:19:05.189 { 00:19:05.189 "method": "iobuf_set_options", 00:19:05.189 "params": { 00:19:05.189 "small_pool_count": 8192, 00:19:05.189 "large_pool_count": 1024, 00:19:05.189 "small_bufsize": 8192, 00:19:05.189 "large_bufsize": 135168 00:19:05.189 } 00:19:05.189 } 00:19:05.189 ] 00:19:05.189 }, 00:19:05.189 { 00:19:05.189 "subsystem": "sock", 00:19:05.189 "config": [ 00:19:05.189 { 00:19:05.189 "method": "sock_set_default_impl", 00:19:05.189 "params": { 00:19:05.189 "impl_name": "uring" 00:19:05.189 } 00:19:05.189 }, 00:19:05.189 { 00:19:05.189 "method": "sock_impl_set_options", 00:19:05.189 "params": { 00:19:05.189 "impl_name": "ssl", 00:19:05.189 "recv_buf_size": 4096, 00:19:05.189 "send_buf_size": 4096, 00:19:05.189 "enable_recv_pipe": true, 00:19:05.189 "enable_quickack": false, 00:19:05.189 "enable_placement_id": 0, 00:19:05.189 "enable_zerocopy_send_server": true, 00:19:05.189 "enable_zerocopy_send_client": false, 00:19:05.189 "zerocopy_threshold": 0, 00:19:05.189 "tls_version": 0, 00:19:05.189 "enable_ktls": false 00:19:05.189 } 00:19:05.190 }, 00:19:05.190 { 00:19:05.190 "method": "sock_impl_set_options", 00:19:05.190 "params": { 00:19:05.190 "impl_name": "posix", 00:19:05.190 "recv_buf_size": 2097152, 00:19:05.190 "send_buf_size": 2097152, 00:19:05.190 "enable_recv_pipe": true, 00:19:05.190 "enable_quickack": false, 00:19:05.190 "enable_placement_id": 0, 00:19:05.190 "enable_zerocopy_send_server": true, 00:19:05.190 "enable_zerocopy_send_client": false, 00:19:05.190 "zerocopy_threshold": 0, 00:19:05.190 "tls_version": 0, 00:19:05.190 "enable_ktls": false 00:19:05.190 } 00:19:05.190 }, 00:19:05.190 { 00:19:05.190 "method": "sock_impl_set_options", 00:19:05.190 "params": { 00:19:05.190 "impl_name": "uring", 00:19:05.190 "recv_buf_size": 2097152, 00:19:05.190 "send_buf_size": 2097152, 00:19:05.190 "enable_recv_pipe": true, 00:19:05.190 "enable_quickack": false, 00:19:05.190 "enable_placement_id": 0, 00:19:05.190 "enable_zerocopy_send_server": false, 00:19:05.190 "enable_zerocopy_send_client": false, 00:19:05.190 "zerocopy_threshold": 0, 00:19:05.190 "tls_version": 0, 00:19:05.190 "enable_ktls": false 00:19:05.190 } 00:19:05.190 } 00:19:05.190 ] 00:19:05.190 }, 00:19:05.190 { 00:19:05.190 "subsystem": "vmd", 00:19:05.190 "config": [] 00:19:05.190 }, 00:19:05.190 { 00:19:05.190 "subsystem": "accel", 00:19:05.190 "config": [ 00:19:05.190 { 00:19:05.190 "method": "accel_set_options", 00:19:05.190 "params": { 00:19:05.190 "small_cache_size": 128, 00:19:05.190 "large_cache_size": 16, 00:19:05.190 "task_count": 2048, 00:19:05.190 "sequence_count": 2048, 00:19:05.190 "buf_count": 2048 00:19:05.190 } 00:19:05.190 } 00:19:05.190 ] 00:19:05.190 }, 00:19:05.190 { 00:19:05.190 "subsystem": "bdev", 00:19:05.190 "config": [ 00:19:05.190 { 00:19:05.190 "method": "bdev_set_options", 00:19:05.190 "params": { 00:19:05.190 "bdev_io_pool_size": 65535, 00:19:05.190 "bdev_io_cache_size": 256, 00:19:05.190 "bdev_auto_examine": true, 00:19:05.190 "iobuf_small_cache_size": 128, 00:19:05.190 "iobuf_large_cache_size": 16 00:19:05.190 } 00:19:05.190 }, 00:19:05.190 { 00:19:05.190 "method": "bdev_raid_set_options", 00:19:05.190 "params": { 00:19:05.190 "process_window_size_kb": 1024, 00:19:05.190 "process_max_bandwidth_mb_sec": 0 00:19:05.190 } 00:19:05.190 }, 00:19:05.190 { 00:19:05.190 "method": "bdev_iscsi_set_options", 00:19:05.190 "params": { 00:19:05.190 "timeout_sec": 30 00:19:05.190 } 00:19:05.190 }, 00:19:05.190 { 00:19:05.190 
"method": "bdev_nvme_set_options", 00:19:05.190 "params": { 00:19:05.190 "action_on_timeout": "none", 00:19:05.190 "timeout_us": 0, 00:19:05.190 "timeout_admin_us": 0, 00:19:05.190 "keep_alive_timeout_ms": 10000, 00:19:05.190 "arbitration_burst": 0, 00:19:05.190 "low_priority_weight": 0, 00:19:05.190 "medium_priority_weight": 0, 00:19:05.190 "high_priority_weight": 0, 00:19:05.190 "nvme_adminq_poll_period_us": 10000, 00:19:05.190 "nvme_ioq_poll_period_us": 0, 00:19:05.190 "io_queue_requests": 0, 00:19:05.190 "delay_cmd_submit": true, 00:19:05.190 "transport_retry_count": 4, 00:19:05.190 "bdev_retry_count": 3, 00:19:05.190 "transport_ack_timeout": 0, 00:19:05.190 "ctrlr_loss_timeout_sec": 0, 00:19:05.190 "reconnect_delay_sec": 0, 00:19:05.190 "fast_io_fail_timeout_sec": 0, 00:19:05.190 "disable_auto_failback": false, 00:19:05.190 "generate_uuids": false, 00:19:05.190 "transport_tos": 0, 00:19:05.190 "nvme_error_stat": false, 00:19:05.190 "rdma_srq_size": 0, 00:19:05.190 "io_path_stat": false, 00:19:05.190 "allow_accel_sequence": false, 00:19:05.190 "rdma_max_cq_size": 0, 00:19:05.190 "rdma_cm_event_timeout_ms": 0, 00:19:05.190 "dhchap_digests": [ 00:19:05.190 "sha256", 00:19:05.190 "sha384", 00:19:05.190 "sha512" 00:19:05.190 ], 00:19:05.190 "dhchap_dhgroups": [ 00:19:05.190 "null", 00:19:05.190 "ffdhe2048", 00:19:05.190 "ffdhe3072", 00:19:05.190 "ffdhe4096", 00:19:05.190 "ffdhe6144", 00:19:05.190 "ffdhe8192" 00:19:05.190 ] 00:19:05.190 } 00:19:05.190 }, 00:19:05.190 { 00:19:05.190 "method": "bdev_nvme_set_hotplug", 00:19:05.190 "params": { 00:19:05.190 "period_us": 100000, 00:19:05.190 "enable": false 00:19:05.190 } 00:19:05.190 }, 00:19:05.190 { 00:19:05.190 "method": "bdev_malloc_create", 00:19:05.190 "params": { 00:19:05.190 "name": "malloc0", 00:19:05.190 "num_blocks": 8192, 00:19:05.190 "block_size": 4096, 00:19:05.190 "physical_block_size": 4096, 00:19:05.190 "uuid": "fdea0478-a0c8-4529-ab17-ef72ad843329", 00:19:05.190 "optimal_io_boundary": 0, 00:19:05.190 "md_size": 0, 00:19:05.190 "dif_type": 0, 00:19:05.190 "dif_is_head_of_md": false, 00:19:05.190 "dif_pi_format": 0 00:19:05.190 } 00:19:05.190 }, 00:19:05.190 { 00:19:05.190 "method": "bdev_wait_for_examine" 00:19:05.190 } 00:19:05.190 ] 00:19:05.190 }, 00:19:05.190 { 00:19:05.190 "subsystem": "nbd", 00:19:05.190 "config": [] 00:19:05.190 }, 00:19:05.190 { 00:19:05.190 "subsystem": "scheduler", 00:19:05.190 "config": [ 00:19:05.190 { 00:19:05.190 "method": "framework_set_scheduler", 00:19:05.190 "params": { 00:19:05.190 "name": "static" 00:19:05.190 } 00:19:05.190 } 00:19:05.190 ] 00:19:05.190 }, 00:19:05.190 { 00:19:05.190 "subsystem": "nvmf", 00:19:05.190 "config": [ 00:19:05.190 { 00:19:05.190 "method": "nvmf_set_config", 00:19:05.190 "params": { 00:19:05.190 "discovery_filter": "match_any", 00:19:05.191 "admin_cmd_passthru": { 00:19:05.191 "identify_ctrlr": false 00:19:05.191 }, 00:19:05.191 "dhchap_digests": [ 00:19:05.191 "sha256", 00:19:05.191 "sha384", 00:19:05.191 "sha512" 00:19:05.191 ], 00:19:05.191 "dhchap_dhgroups": [ 00:19:05.191 "null", 00:19:05.191 "ffdhe2048", 00:19:05.191 "ffdhe3072", 00:19:05.191 "ffdhe4096", 00:19:05.191 "ffdhe6144", 00:19:05.191 "ffdhe8192" 00:19:05.191 ] 00:19:05.191 } 00:19:05.191 }, 00:19:05.191 { 00:19:05.191 "method": "nvmf_set_max_subsystems", 00:19:05.191 "params": { 00:19:05.191 "max_subsystems": 1024 00:19:05.191 } 00:19:05.191 }, 00:19:05.191 { 00:19:05.191 "method": "nvmf_set_crdt", 00:19:05.191 "params": { 00:19:05.191 "crdt1": 0, 00:19:05.191 "crdt2": 0, 00:19:05.191 "crdt3": 0 
00:19:05.191 } 00:19:05.191 }, 00:19:05.191 { 00:19:05.191 "method": "nvmf_create_transport", 00:19:05.191 "params": { 00:19:05.191 "trtype": "TCP", 00:19:05.191 "max_queue_depth": 128, 00:19:05.191 "max_io_qpairs_per_ctrlr": 127, 00:19:05.191 "in_capsule_data_size": 4096, 00:19:05.191 "max_io_size": 131072, 00:19:05.191 "io_unit_size": 131072, 00:19:05.191 "max_aq_depth": 128, 00:19:05.191 "num_shared_buffers": 511, 00:19:05.191 "buf_cache_size": 4294967295, 00:19:05.191 "dif_insert_or_strip": false, 00:19:05.191 "zcopy": false, 00:19:05.191 "c2h_success": false, 00:19:05.191 "sock_priority": 0, 00:19:05.191 "abort_timeout_sec": 1, 00:19:05.191 "ack_timeout": 0, 00:19:05.191 "data_wr_pool_size": 0 00:19:05.191 } 00:19:05.191 }, 00:19:05.191 { 00:19:05.191 "method": "nvmf_create_subsystem", 00:19:05.191 "params": { 00:19:05.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.191 "allow_any_host": false, 00:19:05.191 "serial_number": "SPDK00000000000001", 00:19:05.191 "model_number": "SPDK bdev Controller", 00:19:05.191 "max_namespaces": 10, 00:19:05.191 "min_cntlid": 1, 00:19:05.191 "max_cntlid": 65519, 00:19:05.191 "ana_reporting": false 00:19:05.191 } 00:19:05.191 }, 00:19:05.191 { 00:19:05.191 "method": "nvmf_subsystem_add_host", 00:19:05.191 "params": { 00:19:05.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.191 "host": "nqn.2016-06.io.spdk:host1", 00:19:05.191 "psk": "key0" 00:19:05.191 } 00:19:05.191 }, 00:19:05.191 { 00:19:05.191 "method": "nvmf_subsystem_add_ns", 00:19:05.191 "params": { 00:19:05.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.191 "namespace": { 00:19:05.191 "nsid": 1, 00:19:05.191 "bdev_name": "malloc0", 00:19:05.191 "nguid": "FDEA0478A0C84529AB17EF72AD843329", 00:19:05.191 "uuid": "fdea0478-a0c8-4529-ab17-ef72ad843329", 00:19:05.191 "no_auto_visible": false 00:19:05.191 } 00:19:05.191 } 00:19:05.191 }, 00:19:05.191 { 00:19:05.191 "method": "nvmf_subsystem_add_listener", 00:19:05.191 "params": { 00:19:05.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.191 "listen_address": { 00:19:05.191 "trtype": "TCP", 00:19:05.191 "adrfam": "IPv4", 00:19:05.191 "traddr": "10.0.0.2", 00:19:05.191 "trsvcid": "4420" 00:19:05.191 }, 00:19:05.191 "secure_channel": true 00:19:05.191 } 00:19:05.191 } 00:19:05.191 ] 00:19:05.191 } 00:19:05.191 ] 00:19:05.191 }' 00:19:05.191 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:05.758 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # bdevperfconf='{ 00:19:05.758 "subsystems": [ 00:19:05.758 { 00:19:05.758 "subsystem": "keyring", 00:19:05.758 "config": [ 00:19:05.758 { 00:19:05.758 "method": "keyring_file_add_key", 00:19:05.758 "params": { 00:19:05.758 "name": "key0", 00:19:05.758 "path": "/tmp/tmp.pMg9SCUkBu" 00:19:05.758 } 00:19:05.758 } 00:19:05.758 ] 00:19:05.758 }, 00:19:05.758 { 00:19:05.758 "subsystem": "iobuf", 00:19:05.758 "config": [ 00:19:05.758 { 00:19:05.758 "method": "iobuf_set_options", 00:19:05.758 "params": { 00:19:05.758 "small_pool_count": 8192, 00:19:05.758 "large_pool_count": 1024, 00:19:05.758 "small_bufsize": 8192, 00:19:05.758 "large_bufsize": 135168 00:19:05.758 } 00:19:05.758 } 00:19:05.758 ] 00:19:05.758 }, 00:19:05.758 { 00:19:05.758 "subsystem": "sock", 00:19:05.758 "config": [ 00:19:05.758 { 00:19:05.758 "method": "sock_set_default_impl", 00:19:05.758 "params": { 00:19:05.758 "impl_name": "uring" 00:19:05.758 } 00:19:05.758 }, 00:19:05.758 { 00:19:05.758 "method": 
"sock_impl_set_options", 00:19:05.758 "params": { 00:19:05.758 "impl_name": "ssl", 00:19:05.758 "recv_buf_size": 4096, 00:19:05.758 "send_buf_size": 4096, 00:19:05.758 "enable_recv_pipe": true, 00:19:05.758 "enable_quickack": false, 00:19:05.758 "enable_placement_id": 0, 00:19:05.758 "enable_zerocopy_send_server": true, 00:19:05.758 "enable_zerocopy_send_client": false, 00:19:05.758 "zerocopy_threshold": 0, 00:19:05.758 "tls_version": 0, 00:19:05.758 "enable_ktls": false 00:19:05.758 } 00:19:05.758 }, 00:19:05.758 { 00:19:05.758 "method": "sock_impl_set_options", 00:19:05.758 "params": { 00:19:05.758 "impl_name": "posix", 00:19:05.758 "recv_buf_size": 2097152, 00:19:05.758 "send_buf_size": 2097152, 00:19:05.758 "enable_recv_pipe": true, 00:19:05.758 "enable_quickack": false, 00:19:05.758 "enable_placement_id": 0, 00:19:05.758 "enable_zerocopy_send_server": true, 00:19:05.758 "enable_zerocopy_send_client": false, 00:19:05.758 "zerocopy_threshold": 0, 00:19:05.758 "tls_version": 0, 00:19:05.758 "enable_ktls": false 00:19:05.758 } 00:19:05.758 }, 00:19:05.758 { 00:19:05.758 "method": "sock_impl_set_options", 00:19:05.758 "params": { 00:19:05.758 "impl_name": "uring", 00:19:05.758 "recv_buf_size": 2097152, 00:19:05.758 "send_buf_size": 2097152, 00:19:05.758 "enable_recv_pipe": true, 00:19:05.758 "enable_quickack": false, 00:19:05.758 "enable_placement_id": 0, 00:19:05.758 "enable_zerocopy_send_server": false, 00:19:05.758 "enable_zerocopy_send_client": false, 00:19:05.758 "zerocopy_threshold": 0, 00:19:05.758 "tls_version": 0, 00:19:05.758 "enable_ktls": false 00:19:05.758 } 00:19:05.758 } 00:19:05.758 ] 00:19:05.758 }, 00:19:05.758 { 00:19:05.758 "subsystem": "vmd", 00:19:05.758 "config": [] 00:19:05.758 }, 00:19:05.758 { 00:19:05.758 "subsystem": "accel", 00:19:05.758 "config": [ 00:19:05.758 { 00:19:05.758 "method": "accel_set_options", 00:19:05.758 "params": { 00:19:05.758 "small_cache_size": 128, 00:19:05.758 "large_cache_size": 16, 00:19:05.758 "task_count": 2048, 00:19:05.758 "sequence_count": 2048, 00:19:05.758 "buf_count": 2048 00:19:05.758 } 00:19:05.758 } 00:19:05.758 ] 00:19:05.758 }, 00:19:05.758 { 00:19:05.758 "subsystem": "bdev", 00:19:05.758 "config": [ 00:19:05.758 { 00:19:05.758 "method": "bdev_set_options", 00:19:05.758 "params": { 00:19:05.758 "bdev_io_pool_size": 65535, 00:19:05.758 "bdev_io_cache_size": 256, 00:19:05.758 "bdev_auto_examine": true, 00:19:05.758 "iobuf_small_cache_size": 128, 00:19:05.758 "iobuf_large_cache_size": 16 00:19:05.758 } 00:19:05.758 }, 00:19:05.758 { 00:19:05.758 "method": "bdev_raid_set_options", 00:19:05.758 "params": { 00:19:05.758 "process_window_size_kb": 1024, 00:19:05.758 "process_max_bandwidth_mb_sec": 0 00:19:05.758 } 00:19:05.758 }, 00:19:05.758 { 00:19:05.758 "method": "bdev_iscsi_set_options", 00:19:05.758 "params": { 00:19:05.758 "timeout_sec": 30 00:19:05.758 } 00:19:05.758 }, 00:19:05.758 { 00:19:05.758 "method": "bdev_nvme_set_options", 00:19:05.758 "params": { 00:19:05.758 "action_on_timeout": "none", 00:19:05.758 "timeout_us": 0, 00:19:05.758 "timeout_admin_us": 0, 00:19:05.758 "keep_alive_timeout_ms": 10000, 00:19:05.758 "arbitration_burst": 0, 00:19:05.758 "low_priority_weight": 0, 00:19:05.758 "medium_priority_weight": 0, 00:19:05.758 "high_priority_weight": 0, 00:19:05.758 "nvme_adminq_poll_period_us": 10000, 00:19:05.758 "nvme_ioq_poll_period_us": 0, 00:19:05.758 "io_queue_requests": 512, 00:19:05.758 "delay_cmd_submit": true, 00:19:05.758 "transport_retry_count": 4, 00:19:05.758 "bdev_retry_count": 3, 00:19:05.758 
"transport_ack_timeout": 0, 00:19:05.758 "ctrlr_loss_timeout_sec": 0, 00:19:05.758 "reconnect_delay_sec": 0, 00:19:05.758 "fast_io_fail_timeout_sec": 0, 00:19:05.758 "disable_auto_failback": false, 00:19:05.758 "generate_uuids": false, 00:19:05.758 "transport_tos": 0, 00:19:05.758 "nvme_error_stat": false, 00:19:05.758 "rdma_srq_size": 0, 00:19:05.758 "io_path_stat": false, 00:19:05.758 "allow_accel_sequence": false, 00:19:05.758 "rdma_max_cq_size": 0, 00:19:05.758 "rdma_cm_event_timeout_ms": 0, 00:19:05.758 "dhchap_digests": [ 00:19:05.758 "sha256", 00:19:05.758 "sha384", 00:19:05.758 "sha512" 00:19:05.758 ], 00:19:05.758 "dhchap_dhgroups": [ 00:19:05.758 "null", 00:19:05.758 "ffdhe2048", 00:19:05.758 "ffdhe3072", 00:19:05.758 "ffdhe4096", 00:19:05.758 "ffdhe6144", 00:19:05.758 "ffdhe8192" 00:19:05.758 ] 00:19:05.758 } 00:19:05.758 }, 00:19:05.758 { 00:19:05.758 "method": "bdev_nvme_attach_controller", 00:19:05.758 "params": { 00:19:05.758 "name": "TLSTEST", 00:19:05.758 "trtype": "TCP", 00:19:05.758 "adrfam": "IPv4", 00:19:05.758 "traddr": "10.0.0.2", 00:19:05.758 "trsvcid": "4420", 00:19:05.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.758 "prchk_reftag": false, 00:19:05.758 "prchk_guard": false, 00:19:05.758 "ctrlr_loss_timeout_sec": 0, 00:19:05.758 "reconnect_delay_sec": 0, 00:19:05.758 "fast_io_fail_timeout_sec": 0, 00:19:05.758 "psk": "key0", 00:19:05.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:05.758 "hdgst": false, 00:19:05.758 "ddgst": false 00:19:05.758 } 00:19:05.758 }, 00:19:05.758 { 00:19:05.758 "method": "bdev_nvme_set_hotplug", 00:19:05.758 "params": { 00:19:05.758 "period_us": 100000, 00:19:05.758 "enable": false 00:19:05.758 } 00:19:05.758 }, 00:19:05.758 { 00:19:05.758 "method": "bdev_wait_for_examine" 00:19:05.758 } 00:19:05.758 ] 00:19:05.758 }, 00:19:05.758 { 00:19:05.758 "subsystem": "nbd", 00:19:05.758 "config": [] 00:19:05.758 } 00:19:05.758 ] 00:19:05.758 }' 00:19:05.758 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # killprocess 72194 00:19:05.758 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72194 ']' 00:19:05.758 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72194 00:19:05.758 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:05.758 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:05.758 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72194 00:19:05.759 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:05.759 killing process with pid 72194 00:19:05.759 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:05.759 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72194' 00:19:05.759 Received shutdown signal, test time was about 10.000000 seconds 00:19:05.759 00:19:05.759 Latency(us) 00:19:05.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.759 =================================================================================================================== 00:19:05.759 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:05.759 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72194 00:19:05.759 13:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72194 00:19:05.759 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # killprocess 72127 00:19:05.759 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72127 ']' 00:19:05.759 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72127 00:19:05.759 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:05.759 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:05.759 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72127 00:19:05.759 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:05.759 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:05.759 killing process with pid 72127 00:19:05.759 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72127' 00:19:05.759 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72127 00:19:06.017 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72127 00:19:06.017 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:06.017 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:19:06.017 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:06.017 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.017 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # echo '{ 00:19:06.017 "subsystems": [ 00:19:06.017 { 00:19:06.017 "subsystem": "keyring", 00:19:06.017 "config": [ 00:19:06.017 { 00:19:06.017 "method": "keyring_file_add_key", 00:19:06.017 "params": { 00:19:06.017 "name": "key0", 00:19:06.017 "path": "/tmp/tmp.pMg9SCUkBu" 00:19:06.017 } 00:19:06.017 } 00:19:06.017 ] 00:19:06.017 }, 00:19:06.017 { 00:19:06.017 "subsystem": "iobuf", 00:19:06.017 "config": [ 00:19:06.017 { 00:19:06.017 "method": "iobuf_set_options", 00:19:06.017 "params": { 00:19:06.017 "small_pool_count": 8192, 00:19:06.017 "large_pool_count": 1024, 00:19:06.017 "small_bufsize": 8192, 00:19:06.017 "large_bufsize": 135168 00:19:06.017 } 00:19:06.017 } 00:19:06.017 ] 00:19:06.017 }, 00:19:06.017 { 00:19:06.017 "subsystem": "sock", 00:19:06.017 "config": [ 00:19:06.017 { 00:19:06.017 "method": "sock_set_default_impl", 00:19:06.017 "params": { 00:19:06.017 "impl_name": "uring" 00:19:06.017 } 00:19:06.017 }, 00:19:06.017 { 00:19:06.018 "method": "sock_impl_set_options", 00:19:06.018 "params": { 00:19:06.018 "impl_name": "ssl", 00:19:06.018 "recv_buf_size": 4096, 00:19:06.018 "send_buf_size": 4096, 00:19:06.018 "enable_recv_pipe": true, 00:19:06.018 "enable_quickack": false, 00:19:06.018 "enable_placement_id": 0, 00:19:06.018 "enable_zerocopy_send_server": true, 00:19:06.018 "enable_zerocopy_send_client": false, 00:19:06.018 "zerocopy_threshold": 0, 00:19:06.018 "tls_version": 0, 00:19:06.018 "enable_ktls": false 00:19:06.018 } 00:19:06.018 }, 00:19:06.018 { 00:19:06.018 "method": "sock_impl_set_options", 00:19:06.018 "params": { 00:19:06.018 "impl_name": "posix", 00:19:06.018 "recv_buf_size": 
2097152, 00:19:06.018 "send_buf_size": 2097152, 00:19:06.018 "enable_recv_pipe": true, 00:19:06.018 "enable_quickack": false, 00:19:06.018 "enable_placement_id": 0, 00:19:06.018 "enable_zerocopy_send_server": true, 00:19:06.018 "enable_zerocopy_send_client": false, 00:19:06.018 "zerocopy_threshold": 0, 00:19:06.018 "tls_version": 0, 00:19:06.018 "enable_ktls": false 00:19:06.018 } 00:19:06.018 }, 00:19:06.018 { 00:19:06.018 "method": "sock_impl_set_options", 00:19:06.018 "params": { 00:19:06.018 "impl_name": "uring", 00:19:06.018 "recv_buf_size": 2097152, 00:19:06.018 "send_buf_size": 2097152, 00:19:06.018 "enable_recv_pipe": true, 00:19:06.018 "enable_quickack": false, 00:19:06.018 "enable_placement_id": 0, 00:19:06.018 "enable_zerocopy_send_server": false, 00:19:06.018 "enable_zerocopy_send_client": false, 00:19:06.018 "zerocopy_threshold": 0, 00:19:06.018 "tls_version": 0, 00:19:06.018 "enable_ktls": false 00:19:06.018 } 00:19:06.018 } 00:19:06.018 ] 00:19:06.018 }, 00:19:06.018 { 00:19:06.018 "subsystem": "vmd", 00:19:06.018 "config": [] 00:19:06.018 }, 00:19:06.018 { 00:19:06.018 "subsystem": "accel", 00:19:06.018 "config": [ 00:19:06.018 { 00:19:06.018 "method": "accel_set_options", 00:19:06.018 "params": { 00:19:06.018 "small_cache_size": 128, 00:19:06.018 "large_cache_size": 16, 00:19:06.018 "task_count": 2048, 00:19:06.018 "sequence_count": 2048, 00:19:06.018 "buf_count": 2048 00:19:06.018 } 00:19:06.018 } 00:19:06.018 ] 00:19:06.018 }, 00:19:06.018 { 00:19:06.018 "subsystem": "bdev", 00:19:06.018 "config": [ 00:19:06.018 { 00:19:06.018 "method": "bdev_set_options", 00:19:06.018 "params": { 00:19:06.018 "bdev_io_pool_size": 65535, 00:19:06.018 "bdev_io_cache_size": 256, 00:19:06.018 "bdev_auto_examine": true, 00:19:06.018 "iobuf_small_cache_size": 128, 00:19:06.018 "iobuf_large_cache_size": 16 00:19:06.018 } 00:19:06.018 }, 00:19:06.018 { 00:19:06.018 "method": "bdev_raid_set_options", 00:19:06.018 "params": { 00:19:06.018 "process_window_size_kb": 1024, 00:19:06.018 "process_max_bandwidth_mb_sec": 0 00:19:06.018 } 00:19:06.018 }, 00:19:06.018 { 00:19:06.018 "method": "bdev_iscsi_set_options", 00:19:06.018 "params": { 00:19:06.018 "timeout_sec": 30 00:19:06.018 } 00:19:06.018 }, 00:19:06.018 { 00:19:06.018 "method": "bdev_nvme_set_options", 00:19:06.018 "params": { 00:19:06.018 "action_on_timeout": "none", 00:19:06.018 "timeout_us": 0, 00:19:06.018 "timeout_admin_us": 0, 00:19:06.018 "keep_alive_timeout_ms": 10000, 00:19:06.018 "arbitration_burst": 0, 00:19:06.018 "low_priority_weight": 0, 00:19:06.018 "medium_priority_weight": 0, 00:19:06.018 "high_priority_weight": 0, 00:19:06.018 "nvme_adminq_poll_period_us": 10000, 00:19:06.018 "nvme_ioq_poll_period_us": 0, 00:19:06.018 "io_queue_requests": 0, 00:19:06.018 "delay_cmd_submit": true, 00:19:06.018 "transport_retry_count": 4, 00:19:06.018 "bdev_retry_count": 3, 00:19:06.018 "transport_ack_timeout": 0, 00:19:06.018 "ctrlr_loss_timeout_sec": 0, 00:19:06.018 "reconnect_delay_sec": 0, 00:19:06.018 "fast_io_fail_timeout_sec": 0, 00:19:06.018 "disable_auto_failback": false, 00:19:06.018 "generate_uuids": false, 00:19:06.018 "transport_tos": 0, 00:19:06.018 "nvme_error_stat": false, 00:19:06.018 "rdma_srq_size": 0, 00:19:06.018 "io_path_stat": false, 00:19:06.018 "allow_accel_sequence": false, 00:19:06.018 "rdma_max_cq_size": 0, 00:19:06.018 "rdma_cm_event_timeout_ms": 0, 00:19:06.018 "dhchap_digests": [ 00:19:06.018 "sha256", 00:19:06.018 "sha384", 00:19:06.018 "sha512" 00:19:06.018 ], 00:19:06.018 "dhchap_dhgroups": [ 00:19:06.018 
"null", 00:19:06.018 "ffdhe2048", 00:19:06.018 "ffdhe3072", 00:19:06.018 "ffdhe4096", 00:19:06.018 "ffdhe6144", 00:19:06.018 "ffdhe8192" 00:19:06.018 ] 00:19:06.018 } 00:19:06.018 }, 00:19:06.018 { 00:19:06.018 "method": "bdev_nvme_set_hotplug", 00:19:06.018 "params": { 00:19:06.018 "period_us": 100000, 00:19:06.018 "enable": false 00:19:06.018 } 00:19:06.018 }, 00:19:06.018 { 00:19:06.018 "method": "bdev_malloc_create", 00:19:06.018 "params": { 00:19:06.018 "name": "malloc0", 00:19:06.018 "num_blocks": 8192, 00:19:06.018 "block_size": 4096, 00:19:06.018 "physical_block_size": 4096, 00:19:06.018 "uuid": "fdea0478-a0c8-4529-ab17-ef72ad843329", 00:19:06.018 "optimal_io_boundary": 0, 00:19:06.018 "md_size": 0, 00:19:06.018 "dif_type": 0, 00:19:06.018 "dif_is_head_of_md": false, 00:19:06.018 "dif_pi_format": 0 00:19:06.018 } 00:19:06.018 }, 00:19:06.018 { 00:19:06.018 "method": "bdev_wait_for_examine" 00:19:06.018 } 00:19:06.018 ] 00:19:06.018 }, 00:19:06.018 { 00:19:06.018 "subsystem": "nbd", 00:19:06.018 "config": [] 00:19:06.018 }, 00:19:06.018 { 00:19:06.018 "subsystem": "scheduler", 00:19:06.018 "config": [ 00:19:06.018 { 00:19:06.018 "method": "framework_set_scheduler", 00:19:06.018 "params": { 00:19:06.018 "name": "static" 00:19:06.018 } 00:19:06.018 } 00:19:06.018 ] 00:19:06.018 }, 00:19:06.018 { 00:19:06.018 "subsystem": "nvmf", 00:19:06.018 "config": [ 00:19:06.018 { 00:19:06.018 "method": "nvmf_set_config", 00:19:06.018 "params": { 00:19:06.018 "discovery_filter": "match_any", 00:19:06.018 "admin_cmd_passthru": { 00:19:06.018 "identify_ctrlr": false 00:19:06.018 }, 00:19:06.018 "dhchap_digests": [ 00:19:06.018 "sha256", 00:19:06.018 "sha384", 00:19:06.018 "sha512" 00:19:06.018 ], 00:19:06.018 "dhchap_dhgroups": [ 00:19:06.018 "null", 00:19:06.018 "ffdhe2048", 00:19:06.018 "ffdhe3072", 00:19:06.018 "ffdhe4096", 00:19:06.018 "ffdhe6144", 00:19:06.018 "ffdhe8192" 00:19:06.018 ] 00:19:06.018 } 00:19:06.018 }, 00:19:06.018 { 00:19:06.018 "method": "nvmf_set_max_subsystems", 00:19:06.018 "params": { 00:19:06.018 "max_subsystems": 1024 00:19:06.018 } 00:19:06.018 }, 00:19:06.018 { 00:19:06.018 "method": "nvmf_set_crdt", 00:19:06.018 "params": { 00:19:06.018 "crdt1": 0, 00:19:06.018 "crdt2": 0, 00:19:06.018 "crdt3": 0 00:19:06.018 } 00:19:06.018 }, 00:19:06.018 { 00:19:06.018 "method": "nvmf_create_transport", 00:19:06.018 "params": { 00:19:06.018 "trtype": "TCP", 00:19:06.018 "max_queue_depth": 128, 00:19:06.018 "max_io_qpairs_per_ctrlr": 127, 00:19:06.018 "in_capsule_data_size": 4096, 00:19:06.018 "max_io_size": 131072, 00:19:06.018 "io_unit_size": 131072, 00:19:06.018 "max_aq_depth": 128, 00:19:06.018 "num_shared_buffers": 511, 00:19:06.018 "buf_cache_size": 4294967295, 00:19:06.018 "dif_insert_or_strip": false, 00:19:06.018 "zcopy": false, 00:19:06.018 "c2h_success": false, 00:19:06.019 "sock_priority": 0, 00:19:06.019 "abort_timeout_sec": 1, 00:19:06.019 "ack_timeout": 0, 00:19:06.019 "data_wr_pool_size": 0 00:19:06.019 } 00:19:06.019 }, 00:19:06.019 { 00:19:06.019 "method": "nvmf_create_subsystem", 00:19:06.019 "params": { 00:19:06.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.019 "allow_any_host": false, 00:19:06.019 "serial_number": "SPDK00000000000001", 00:19:06.019 "model_number": "SPDK bdev Controller", 00:19:06.019 "max_namespaces": 10, 00:19:06.019 "min_cntlid": 1, 00:19:06.019 "max_cntlid": 65519, 00:19:06.019 "ana_reporting": false 00:19:06.019 } 00:19:06.019 }, 00:19:06.019 { 00:19:06.019 "method": "nvmf_subsystem_add_host", 00:19:06.019 "params": { 00:19:06.019 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:19:06.019 "host": "nqn.2016-06.io.spdk:host1", 00:19:06.019 "psk": "key0" 00:19:06.019 } 00:19:06.019 }, 00:19:06.019 { 00:19:06.019 "method": "nvmf_subsystem_add_ns", 00:19:06.019 "params": { 00:19:06.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.019 "namespace": { 00:19:06.019 "nsid": 1, 00:19:06.019 "bdev_name": "malloc0", 00:19:06.019 "nguid": "FDEA0478A0C84529AB17EF72AD843329", 00:19:06.019 "uuid": "fdea0478-a0c8-4529-ab17-ef72ad843329", 00:19:06.019 "no_auto_visible": false 00:19:06.019 } 00:19:06.019 } 00:19:06.019 }, 00:19:06.019 { 00:19:06.019 "method": "nvmf_subsystem_add_listener", 00:19:06.019 "params": { 00:19:06.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.019 "listen_address": { 00:19:06.019 "trtype": "TCP", 00:19:06.019 "adrfam": "IPv4", 00:19:06.019 "traddr": "10.0.0.2", 00:19:06.019 "trsvcid": "4420" 00:19:06.019 }, 00:19:06.019 "secure_channel": true 00:19:06.019 } 00:19:06.019 } 00:19:06.019 ] 00:19:06.019 } 00:19:06.019 ] 00:19:06.019 }' 00:19:06.019 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@324 -- # nvmfpid=72249 00:19:06.019 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:06.019 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # waitforlisten 72249 00:19:06.019 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72249 ']' 00:19:06.019 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.019 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:06.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.019 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.019 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:06.019 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.019 [2024-09-27 13:24:07.855816] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:19:06.019 [2024-09-27 13:24:07.855934] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.277 [2024-09-27 13:24:08.012120] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.277 [2024-09-27 13:24:08.098780] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.277 [2024-09-27 13:24:08.098855] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.277 [2024-09-27 13:24:08.098872] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.277 [2024-09-27 13:24:08.098885] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.277 [2024-09-27 13:24:08.098896] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:06.277 [2024-09-27 13:24:08.099012] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.535 [2024-09-27 13:24:08.244218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:06.535 [2024-09-27 13:24:08.301426] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.535 [2024-09-27 13:24:08.344429] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:06.535 [2024-09-27 13:24:08.344674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.102 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:07.102 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:07.102 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:19:07.102 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:07.103 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.103 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.103 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # bdevperf_pid=72281 00:19:07.103 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # waitforlisten 72281 /var/tmp/bdevperf.sock 00:19:07.103 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72281 ']' 00:19:07.103 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:07.103 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:07.103 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:07.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
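
The next trace line records the matching initiator: a standalone bdevperf (pid 72281) is started with -z, so it waits on its own RPC socket (/var/tmp/bdevperf.sock) for a perform_tests request, and with -c /dev/fd/63 it reads its own JSON configuration, the one echoed below, which carries the key0 PSK and the TLS-enabled bdev_nvme controller. A sketch of that pattern, with $bperf_config standing in for the echoed JSON and paths abbreviated:

  # Start bdevperf in wait-for-tests mode on a private RPC socket.
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bperf_config") &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
  # The 10-second run itself is then triggered remotely, as traced further down:
  #   bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
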
00:19:07.103 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:07.103 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:07.103 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.103 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # echo '{ 00:19:07.103 "subsystems": [ 00:19:07.103 { 00:19:07.103 "subsystem": "keyring", 00:19:07.103 "config": [ 00:19:07.103 { 00:19:07.103 "method": "keyring_file_add_key", 00:19:07.103 "params": { 00:19:07.103 "name": "key0", 00:19:07.103 "path": "/tmp/tmp.pMg9SCUkBu" 00:19:07.103 } 00:19:07.103 } 00:19:07.103 ] 00:19:07.103 }, 00:19:07.103 { 00:19:07.103 "subsystem": "iobuf", 00:19:07.103 "config": [ 00:19:07.103 { 00:19:07.103 "method": "iobuf_set_options", 00:19:07.103 "params": { 00:19:07.103 "small_pool_count": 8192, 00:19:07.103 "large_pool_count": 1024, 00:19:07.103 "small_bufsize": 8192, 00:19:07.103 "large_bufsize": 135168 00:19:07.103 } 00:19:07.103 } 00:19:07.103 ] 00:19:07.103 }, 00:19:07.103 { 00:19:07.103 "subsystem": "sock", 00:19:07.103 "config": [ 00:19:07.103 { 00:19:07.103 "method": "sock_set_default_impl", 00:19:07.103 "params": { 00:19:07.103 "impl_name": "uring" 00:19:07.103 } 00:19:07.103 }, 00:19:07.103 { 00:19:07.103 "method": "sock_impl_set_options", 00:19:07.103 "params": { 00:19:07.103 "impl_name": "ssl", 00:19:07.103 "recv_buf_size": 4096, 00:19:07.103 "send_buf_size": 4096, 00:19:07.103 "enable_recv_pipe": true, 00:19:07.103 "enable_quickack": false, 00:19:07.103 "enable_placement_id": 0, 00:19:07.103 "enable_zerocopy_send_server": true, 00:19:07.103 "enable_zerocopy_send_client": false, 00:19:07.103 "zerocopy_threshold": 0, 00:19:07.103 "tls_version": 0, 00:19:07.103 "enable_ktls": false 00:19:07.103 } 00:19:07.103 }, 00:19:07.103 { 00:19:07.103 "method": "sock_impl_set_options", 00:19:07.103 "params": { 00:19:07.103 "impl_name": "posix", 00:19:07.103 "recv_buf_size": 2097152, 00:19:07.103 "send_buf_size": 2097152, 00:19:07.103 "enable_recv_pipe": true, 00:19:07.103 "enable_quickack": false, 00:19:07.103 "enable_placement_id": 0, 00:19:07.103 "enable_zerocopy_send_server": true, 00:19:07.103 "enable_zerocopy_send_client": false, 00:19:07.103 "zerocopy_threshold": 0, 00:19:07.103 "tls_version": 0, 00:19:07.103 "enable_ktls": false 00:19:07.103 } 00:19:07.103 }, 00:19:07.103 { 00:19:07.103 "method": "sock_impl_set_options", 00:19:07.103 "params": { 00:19:07.103 "impl_name": "uring", 00:19:07.103 "recv_buf_size": 2097152, 00:19:07.103 "send_buf_size": 2097152, 00:19:07.103 "enable_recv_pipe": true, 00:19:07.103 "enable_quickack": false, 00:19:07.103 "enable_placement_id": 0, 00:19:07.103 "enable_zerocopy_send_server": false, 00:19:07.103 "enable_zerocopy_send_client": false, 00:19:07.103 "zerocopy_threshold": 0, 00:19:07.103 "tls_version": 0, 00:19:07.103 "enable_ktls": false 00:19:07.103 } 00:19:07.103 } 00:19:07.103 ] 00:19:07.103 }, 00:19:07.103 { 00:19:07.103 "subsystem": "vmd", 00:19:07.103 "config": [] 00:19:07.103 }, 00:19:07.103 { 00:19:07.103 "subsystem": "accel", 00:19:07.103 "config": [ 00:19:07.103 { 00:19:07.103 "method": "accel_set_options", 00:19:07.103 "params": { 00:19:07.103 "small_cache_size": 128, 00:19:07.103 "large_cache_size": 16, 00:19:07.103 "task_count": 2048, 00:19:07.103 "sequence_count": 2048, 00:19:07.103 "buf_count": 2048 
00:19:07.103 } 00:19:07.103 } 00:19:07.103 ] 00:19:07.103 }, 00:19:07.103 { 00:19:07.103 "subsystem": "bdev", 00:19:07.103 "config": [ 00:19:07.103 { 00:19:07.103 "method": "bdev_set_options", 00:19:07.103 "params": { 00:19:07.103 "bdev_io_pool_size": 65535, 00:19:07.103 "bdev_io_cache_size": 256, 00:19:07.103 "bdev_auto_examine": true, 00:19:07.103 "iobuf_small_cache_size": 128, 00:19:07.103 "iobuf_large_cache_size": 16 00:19:07.103 } 00:19:07.103 }, 00:19:07.103 { 00:19:07.103 "method": "bdev_raid_set_options", 00:19:07.103 "params": { 00:19:07.103 "process_window_size_kb": 1024, 00:19:07.103 "process_max_bandwidth_mb_sec": 0 00:19:07.103 } 00:19:07.103 }, 00:19:07.103 { 00:19:07.103 "method": "bdev_iscsi_set_options", 00:19:07.103 "params": { 00:19:07.103 "timeout_sec": 30 00:19:07.103 } 00:19:07.103 }, 00:19:07.103 { 00:19:07.103 "method": "bdev_nvme_set_options", 00:19:07.103 "params": { 00:19:07.103 "action_on_timeout": "none", 00:19:07.103 "timeout_us": 0, 00:19:07.103 "timeout_admin_us": 0, 00:19:07.103 "keep_alive_timeout_ms": 10000, 00:19:07.103 "arbitration_burst": 0, 00:19:07.103 "low_priority_weight": 0, 00:19:07.103 "medium_priority_weight": 0, 00:19:07.103 "high_priority_weight": 0, 00:19:07.103 "nvme_adminq_poll_period_us": 10000, 00:19:07.103 "nvme_ioq_poll_period_us": 0, 00:19:07.103 "io_queue_requests": 512, 00:19:07.103 "delay_cmd_submit": true, 00:19:07.103 "transport_retry_count": 4, 00:19:07.103 "bdev_retry_count": 3, 00:19:07.103 "transport_ack_timeout": 0, 00:19:07.103 "ctrlr_loss_timeout_sec": 0, 00:19:07.103 "reconnect_delay_sec": 0, 00:19:07.103 "fast_io_fail_timeout_sec": 0, 00:19:07.103 "disable_auto_failback": false, 00:19:07.103 "generate_uuids": false, 00:19:07.103 "transport_tos": 0, 00:19:07.103 "nvme_error_stat": false, 00:19:07.103 "rdma_srq_size": 0, 00:19:07.103 "io_path_stat": false, 00:19:07.103 "allow_accel_sequence": false, 00:19:07.103 "rdma_max_cq_size": 0, 00:19:07.103 "rdma_cm_event_timeout_ms": 0, 00:19:07.103 "dhchap_digests": [ 00:19:07.103 "sha256", 00:19:07.103 "sha384", 00:19:07.103 "sha512" 00:19:07.103 ], 00:19:07.103 "dhchap_dhgroups": [ 00:19:07.103 "null", 00:19:07.103 "ffdhe2048", 00:19:07.103 "ffdhe3072", 00:19:07.103 "ffdhe4096", 00:19:07.103 "ffdhe6144", 00:19:07.103 "ffdhe8192" 00:19:07.103 ] 00:19:07.103 } 00:19:07.103 }, 00:19:07.103 { 00:19:07.103 "method": "bdev_nvme_attach_controller", 00:19:07.103 "params": { 00:19:07.103 "name": "TLSTEST", 00:19:07.103 "trtype": "TCP", 00:19:07.103 "adrfam": "IPv4", 00:19:07.103 "traddr": "10.0.0.2", 00:19:07.103 "trsvcid": "4420", 00:19:07.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.103 "prchk_reftag": false, 00:19:07.103 "prchk_guard": false, 00:19:07.103 "ctrlr_loss_timeout_sec": 0, 00:19:07.103 "reconnect_delay_sec": 0, 00:19:07.103 "fast_io_fail_timeout_sec": 0, 00:19:07.103 "psk": "key0", 00:19:07.103 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:07.103 "hdgst": false, 00:19:07.103 "ddgst": false 00:19:07.103 } 00:19:07.103 }, 00:19:07.103 { 00:19:07.103 "method": "bdev_nvme_set_hotplug", 00:19:07.103 "params": { 00:19:07.103 "period_us": 100000, 00:19:07.103 "enable": false 00:19:07.103 } 00:19:07.103 }, 00:19:07.103 { 00:19:07.103 "method": "bdev_wait_for_examine" 00:19:07.103 } 00:19:07.103 ] 00:19:07.103 }, 00:19:07.103 { 00:19:07.103 "subsystem": "nbd", 00:19:07.103 "config": [] 00:19:07.103 } 00:19:07.103 ] 00:19:07.104 }' 00:19:07.362 [2024-09-27 13:24:08.990531] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:19:07.362 [2024-09-27 13:24:08.990669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72281 ] 00:19:07.362 [2024-09-27 13:24:09.137146] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.362 [2024-09-27 13:24:09.197908] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.620 [2024-09-27 13:24:09.310121] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:07.620 [2024-09-27 13:24:09.342774] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:08.554 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:08.554 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:08.554 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:08.554 Running I/O for 10 seconds... 00:19:18.621 3760.00 IOPS, 14.69 MiB/s 3825.50 IOPS, 14.94 MiB/s 3849.33 IOPS, 15.04 MiB/s 3857.00 IOPS, 15.07 MiB/s 3856.80 IOPS, 15.07 MiB/s 3856.00 IOPS, 15.06 MiB/s 3853.29 IOPS, 15.05 MiB/s 3784.62 IOPS, 14.78 MiB/s 3751.44 IOPS, 14.65 MiB/s 3753.30 IOPS, 14.66 MiB/s 00:19:18.621 Latency(us) 00:19:18.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.621 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:18.621 Verification LBA range: start 0x0 length 0x2000 00:19:18.621 TLSTESTn1 : 10.02 3758.29 14.68 0.00 0.00 33996.13 6762.12 35508.60 00:19:18.621 =================================================================================================================== 00:19:18.621 Total : 3758.29 14.68 0.00 0.00 33996.13 6762.12 35508.60 00:19:18.621 { 00:19:18.621 "results": [ 00:19:18.621 { 00:19:18.621 "job": "TLSTESTn1", 00:19:18.621 "core_mask": "0x4", 00:19:18.621 "workload": "verify", 00:19:18.621 "status": "finished", 00:19:18.621 "verify_range": { 00:19:18.621 "start": 0, 00:19:18.621 "length": 8192 00:19:18.621 }, 00:19:18.621 "queue_depth": 128, 00:19:18.621 "io_size": 4096, 00:19:18.621 "runtime": 10.020259, 00:19:18.621 "iops": 3758.2860882138875, 00:19:18.621 "mibps": 14.680805032085498, 00:19:18.621 "io_failed": 0, 00:19:18.621 "io_timeout": 0, 00:19:18.621 "avg_latency_us": 33996.13415737878, 00:19:18.621 "min_latency_us": 6762.123636363636, 00:19:18.621 "max_latency_us": 35508.59636363637 00:19:18.621 } 00:19:18.621 ], 00:19:18.621 "core_count": 1 00:19:18.621 } 00:19:18.621 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:18.621 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # killprocess 72281 00:19:18.621 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72281 ']' 00:19:18.621 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72281 00:19:18.621 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:18.621 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:18.621 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72281 00:19:18.621 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:18.621 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:18.621 killing process with pid 72281 00:19:18.621 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72281' 00:19:18.621 Received shutdown signal, test time was about 10.000000 seconds 00:19:18.621 00:19:18.621 Latency(us) 00:19:18.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.621 =================================================================================================================== 00:19:18.621 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:18.621 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72281 00:19:18.621 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72281 00:19:18.879 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@212 -- # killprocess 72249 00:19:18.879 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72249 ']' 00:19:18.879 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72249 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72249 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72249' 00:19:18.880 killing process with pid 72249 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72249 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72249 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # nvmfappstart 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@324 -- # nvmfpid=72414 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # waitforlisten 72414 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72414 ']' 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 
00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.880 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.138 [2024-09-27 13:24:20.771120] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:19:19.138 [2024-09-27 13:24:20.771260] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.138 [2024-09-27 13:24:20.909904] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.138 [2024-09-27 13:24:20.969006] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.138 [2024-09-27 13:24:20.969077] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.138 [2024-09-27 13:24:20.969089] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.138 [2024-09-27 13:24:20.969098] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.138 [2024-09-27 13:24:20.969105] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:19.138 [2024-09-27 13:24:20.969137] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.396 [2024-09-27 13:24:20.999207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:19.396 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.396 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:19.396 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:19:19.396 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:19.396 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.396 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.396 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # setup_nvmf_tgt /tmp/tmp.pMg9SCUkBu 00:19:19.396 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pMg9SCUkBu 00:19:19.396 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:19.655 [2024-09-27 13:24:21.374478] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.655 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:19.913 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:19:20.170 [2024-09-27 13:24:21.934615] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:20.170 [2024-09-27 13:24:21.935315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.170 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:20.427 malloc0 00:19:20.685 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:20.942 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pMg9SCUkBu 00:19:21.200 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:21.458 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # bdevperf_pid=72468 00:19:21.458 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:21.458 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:21.458 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # waitforlisten 72468 /var/tmp/bdevperf.sock 00:19:21.458 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72468 ']' 00:19:21.458 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.458 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:21.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:21.458 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:21.458 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:21.458 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.458 [2024-09-27 13:24:23.228325] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
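
Unlike the first target, this second one (pid 72414) comes up without a JSON config and is configured entirely over RPC by tls.sh's setup_nvmf_tgt: a TCP transport, subsystem cnode1 backed by a malloc bdev, a listener on 10.0.0.2:4420 added with -k (the TLS/secure-channel form used throughout this tls.sh run, hence the "TLS support is considered experimental" notice from nvmf_tcp_listen above), the PSK file registered as key0, and host1 admitted with that PSK. Condensed from the trace above, with the rpc.py path abbreviated:

  # Target-side TLS setup, one RPC at a time (the key file is the test's temp PSK).
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pMg9SCUkBu
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
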
00:19:21.458 [2024-09-27 13:24:23.228438] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72468 ] 00:19:21.715 [2024-09-27 13:24:23.361749] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.715 [2024-09-27 13:24:23.435540] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.715 [2024-09-27 13:24:23.469918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:22.647 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:22.647 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:22.647 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pMg9SCUkBu 00:19:23.238 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:23.496 [2024-09-27 13:24:25.294617] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.753 nvme0n1 00:19:23.753 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:23.753 Running I/O for 1 seconds... 00:19:25.126 3712.00 IOPS, 14.50 MiB/s 00:19:25.126 Latency(us) 00:19:25.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.126 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:25.126 Verification LBA range: start 0x0 length 0x2000 00:19:25.126 nvme0n1 : 1.03 3737.95 14.60 0.00 0.00 33805.15 7268.54 23473.80 00:19:25.126 =================================================================================================================== 00:19:25.126 Total : 3737.95 14.60 0.00 0.00 33805.15 7268.54 23473.80 00:19:25.126 { 00:19:25.126 "results": [ 00:19:25.126 { 00:19:25.126 "job": "nvme0n1", 00:19:25.126 "core_mask": "0x2", 00:19:25.126 "workload": "verify", 00:19:25.126 "status": "finished", 00:19:25.126 "verify_range": { 00:19:25.126 "start": 0, 00:19:25.126 "length": 8192 00:19:25.126 }, 00:19:25.126 "queue_depth": 128, 00:19:25.126 "io_size": 4096, 00:19:25.126 "runtime": 1.027301, 00:19:25.126 "iops": 3737.9502210160413, 00:19:25.126 "mibps": 14.601368050843911, 00:19:25.126 "io_failed": 0, 00:19:25.126 "io_timeout": 0, 00:19:25.126 "avg_latency_us": 33805.15296969697, 00:19:25.126 "min_latency_us": 7268.538181818182, 00:19:25.126 "max_latency_us": 23473.803636363635 00:19:25.127 } 00:19:25.127 ], 00:19:25.127 "core_count": 1 00:19:25.127 } 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@231 -- # killprocess 72468 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72468 ']' 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72468 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:25.127 13:24:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72468 00:19:25.127 killing process with pid 72468 00:19:25.127 Received shutdown signal, test time was about 1.000000 seconds 00:19:25.127 00:19:25.127 Latency(us) 00:19:25.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.127 =================================================================================================================== 00:19:25.127 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72468' 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72468 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72468 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # killprocess 72414 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72414 ']' 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72414 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72414 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:25.127 killing process with pid 72414 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72414' 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72414 00:19:25.127 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72414 00:19:25.385 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # nvmfappstart 00:19:25.385 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:19:25.385 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:25.386 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.386 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@324 -- # nvmfpid=72525 00:19:25.386 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:25.386 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # waitforlisten 72525 00:19:25.386 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72525 ']' 00:19:25.386 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:19:25.386 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:25.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.386 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.386 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:25.386 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.386 [2024-09-27 13:24:27.108009] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:19:25.386 [2024-09-27 13:24:27.108340] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.644 [2024-09-27 13:24:27.267734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.644 [2024-09-27 13:24:27.351122] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.644 [2024-09-27 13:24:27.351195] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.644 [2024-09-27 13:24:27.351210] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.644 [2024-09-27 13:24:27.351223] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.644 [2024-09-27 13:24:27.351233] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
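
Once this third target (pid 72525) finishes starting up, the remaining trace repeats the TLS path from the initiator side: the bdevperf instance that follows (pid 72557) has the same PSK loaded into its keyring over /var/tmp/bdevperf.sock, attaches nvme0 to cnode1 with --psk key0, and then runs a 1-second perform_tests pass. Condensed from the trace below, with paths abbreviated:

  # Initiator-side TLS attach, driven over bdevperf's RPC socket.
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pMg9SCUkBu
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
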
00:19:25.644 [2024-09-27 13:24:27.351271] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.644 [2024-09-27 13:24:27.392060] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:26.578 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:26.578 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:26.578 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:19:26.578 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:26.578 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.578 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.578 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@238 -- # rpc_cmd 00:19:26.578 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.579 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.579 [2024-09-27 13:24:28.220739] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.579 malloc0 00:19:26.579 [2024-09-27 13:24:28.260621] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:26.579 [2024-09-27 13:24:28.260879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.579 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.579 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@251 -- # bdevperf_pid=72557 00:19:26.579 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@249 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:26.579 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@253 -- # waitforlisten 72557 /var/tmp/bdevperf.sock 00:19:26.579 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72557 ']' 00:19:26.579 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.579 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:26.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:26.579 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.579 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:26.579 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.579 [2024-09-27 13:24:28.341818] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
00:19:26.579 [2024-09-27 13:24:28.341933] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72557 ] 00:19:26.836 [2024-09-27 13:24:28.483167] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.836 [2024-09-27 13:24:28.565360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.836 [2024-09-27 13:24:28.595643] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:27.772 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:27.772 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:27.772 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pMg9SCUkBu 00:19:28.030 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:28.288 [2024-09-27 13:24:30.042577] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:28.288 nvme0n1 00:19:28.288 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:28.546 Running I/O for 1 seconds... 00:19:29.496 3490.00 IOPS, 13.63 MiB/s 00:19:29.496 Latency(us) 00:19:29.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.496 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:29.496 Verification LBA range: start 0x0 length 0x2000 00:19:29.496 nvme0n1 : 1.02 3554.18 13.88 0.00 0.00 35664.68 6881.28 41704.73 00:19:29.496 =================================================================================================================== 00:19:29.496 Total : 3554.18 13.88 0.00 0.00 35664.68 6881.28 41704.73 00:19:29.496 { 00:19:29.496 "results": [ 00:19:29.496 { 00:19:29.496 "job": "nvme0n1", 00:19:29.496 "core_mask": "0x2", 00:19:29.496 "workload": "verify", 00:19:29.496 "status": "finished", 00:19:29.496 "verify_range": { 00:19:29.496 "start": 0, 00:19:29.496 "length": 8192 00:19:29.496 }, 00:19:29.496 "queue_depth": 128, 00:19:29.496 "io_size": 4096, 00:19:29.496 "runtime": 1.017957, 00:19:29.496 "iops": 3554.177632257551, 00:19:29.496 "mibps": 13.883506376006059, 00:19:29.496 "io_failed": 0, 00:19:29.496 "io_timeout": 0, 00:19:29.496 "avg_latency_us": 35664.67621488517, 00:19:29.496 "min_latency_us": 6881.28, 00:19:29.496 "max_latency_us": 41704.72727272727 00:19:29.496 } 00:19:29.496 ], 00:19:29.496 "core_count": 1 00:19:29.496 } 00:19:29.496 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # rpc_cmd save_config 00:19:29.496 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.496 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.782 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.782 13:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # tgtcfg='{ 00:19:29.782 "subsystems": [ 00:19:29.782 { 00:19:29.782 "subsystem": "keyring", 00:19:29.782 "config": [ 00:19:29.782 { 00:19:29.782 "method": "keyring_file_add_key", 00:19:29.782 "params": { 00:19:29.782 "name": "key0", 00:19:29.782 "path": "/tmp/tmp.pMg9SCUkBu" 00:19:29.782 } 00:19:29.782 } 00:19:29.782 ] 00:19:29.782 }, 00:19:29.782 { 00:19:29.782 "subsystem": "iobuf", 00:19:29.782 "config": [ 00:19:29.782 { 00:19:29.782 "method": "iobuf_set_options", 00:19:29.782 "params": { 00:19:29.782 "small_pool_count": 8192, 00:19:29.782 "large_pool_count": 1024, 00:19:29.782 "small_bufsize": 8192, 00:19:29.782 "large_bufsize": 135168 00:19:29.782 } 00:19:29.782 } 00:19:29.782 ] 00:19:29.782 }, 00:19:29.782 { 00:19:29.782 "subsystem": "sock", 00:19:29.782 "config": [ 00:19:29.782 { 00:19:29.782 "method": "sock_set_default_impl", 00:19:29.782 "params": { 00:19:29.782 "impl_name": "uring" 00:19:29.782 } 00:19:29.782 }, 00:19:29.782 { 00:19:29.782 "method": "sock_impl_set_options", 00:19:29.782 "params": { 00:19:29.782 "impl_name": "ssl", 00:19:29.782 "recv_buf_size": 4096, 00:19:29.782 "send_buf_size": 4096, 00:19:29.782 "enable_recv_pipe": true, 00:19:29.782 "enable_quickack": false, 00:19:29.782 "enable_placement_id": 0, 00:19:29.782 "enable_zerocopy_send_server": true, 00:19:29.782 "enable_zerocopy_send_client": false, 00:19:29.782 "zerocopy_threshold": 0, 00:19:29.782 "tls_version": 0, 00:19:29.782 "enable_ktls": false 00:19:29.782 } 00:19:29.782 }, 00:19:29.782 { 00:19:29.782 "method": "sock_impl_set_options", 00:19:29.782 "params": { 00:19:29.782 "impl_name": "posix", 00:19:29.782 "recv_buf_size": 2097152, 00:19:29.782 "send_buf_size": 2097152, 00:19:29.782 "enable_recv_pipe": true, 00:19:29.782 "enable_quickack": false, 00:19:29.782 "enable_placement_id": 0, 00:19:29.782 "enable_zerocopy_send_server": true, 00:19:29.782 "enable_zerocopy_send_client": false, 00:19:29.782 "zerocopy_threshold": 0, 00:19:29.782 "tls_version": 0, 00:19:29.782 "enable_ktls": false 00:19:29.782 } 00:19:29.782 }, 00:19:29.782 { 00:19:29.782 "method": "sock_impl_set_options", 00:19:29.782 "params": { 00:19:29.782 "impl_name": "uring", 00:19:29.782 "recv_buf_size": 2097152, 00:19:29.782 "send_buf_size": 2097152, 00:19:29.782 "enable_recv_pipe": true, 00:19:29.782 "enable_quickack": false, 00:19:29.782 "enable_placement_id": 0, 00:19:29.782 "enable_zerocopy_send_server": false, 00:19:29.782 "enable_zerocopy_send_client": false, 00:19:29.782 "zerocopy_threshold": 0, 00:19:29.782 "tls_version": 0, 00:19:29.782 "enable_ktls": false 00:19:29.782 } 00:19:29.782 } 00:19:29.782 ] 00:19:29.782 }, 00:19:29.782 { 00:19:29.782 "subsystem": "vmd", 00:19:29.782 "config": [] 00:19:29.782 }, 00:19:29.782 { 00:19:29.782 "subsystem": "accel", 00:19:29.782 "config": [ 00:19:29.782 { 00:19:29.782 "method": "accel_set_options", 00:19:29.782 "params": { 00:19:29.782 "small_cache_size": 128, 00:19:29.782 "large_cache_size": 16, 00:19:29.782 "task_count": 2048, 00:19:29.782 "sequence_count": 2048, 00:19:29.782 "buf_count": 2048 00:19:29.782 } 00:19:29.782 } 00:19:29.782 ] 00:19:29.782 }, 00:19:29.782 { 00:19:29.782 "subsystem": "bdev", 00:19:29.782 "config": [ 00:19:29.782 { 00:19:29.782 "method": "bdev_set_options", 00:19:29.782 "params": { 00:19:29.782 "bdev_io_pool_size": 65535, 00:19:29.782 "bdev_io_cache_size": 256, 00:19:29.782 "bdev_auto_examine": true, 00:19:29.782 "iobuf_small_cache_size": 128, 00:19:29.782 "iobuf_large_cache_size": 16 00:19:29.782 } 
00:19:29.782 }, 00:19:29.782 { 00:19:29.782 "method": "bdev_raid_set_options", 00:19:29.782 "params": { 00:19:29.782 "process_window_size_kb": 1024, 00:19:29.782 "process_max_bandwidth_mb_sec": 0 00:19:29.782 } 00:19:29.782 }, 00:19:29.782 { 00:19:29.782 "method": "bdev_iscsi_set_options", 00:19:29.782 "params": { 00:19:29.782 "timeout_sec": 30 00:19:29.782 } 00:19:29.782 }, 00:19:29.782 { 00:19:29.782 "method": "bdev_nvme_set_options", 00:19:29.782 "params": { 00:19:29.782 "action_on_timeout": "none", 00:19:29.782 "timeout_us": 0, 00:19:29.782 "timeout_admin_us": 0, 00:19:29.782 "keep_alive_timeout_ms": 10000, 00:19:29.782 "arbitration_burst": 0, 00:19:29.782 "low_priority_weight": 0, 00:19:29.782 "medium_priority_weight": 0, 00:19:29.782 "high_priority_weight": 0, 00:19:29.782 "nvme_adminq_poll_period_us": 10000, 00:19:29.782 "nvme_ioq_poll_period_us": 0, 00:19:29.782 "io_queue_requests": 0, 00:19:29.782 "delay_cmd_submit": true, 00:19:29.782 "transport_retry_count": 4, 00:19:29.782 "bdev_retry_count": 3, 00:19:29.782 "transport_ack_timeout": 0, 00:19:29.782 "ctrlr_loss_timeout_sec": 0, 00:19:29.782 "reconnect_delay_sec": 0, 00:19:29.782 "fast_io_fail_timeout_sec": 0, 00:19:29.782 "disable_auto_failback": false, 00:19:29.782 "generate_uuids": false, 00:19:29.782 "transport_tos": 0, 00:19:29.782 "nvme_error_stat": false, 00:19:29.782 "rdma_srq_size": 0, 00:19:29.782 "io_path_stat": false, 00:19:29.782 "allow_accel_sequence": false, 00:19:29.782 "rdma_max_cq_size": 0, 00:19:29.782 "rdma_cm_event_timeout_ms": 0, 00:19:29.782 "dhchap_digests": [ 00:19:29.782 "sha256", 00:19:29.782 "sha384", 00:19:29.782 "sha512" 00:19:29.782 ], 00:19:29.782 "dhchap_dhgroups": [ 00:19:29.782 "null", 00:19:29.782 "ffdhe2048", 00:19:29.782 "ffdhe3072", 00:19:29.782 "ffdhe4096", 00:19:29.782 "ffdhe6144", 00:19:29.782 "ffdhe8192" 00:19:29.782 ] 00:19:29.782 } 00:19:29.782 }, 00:19:29.782 { 00:19:29.782 "method": "bdev_nvme_set_hotplug", 00:19:29.782 "params": { 00:19:29.782 "period_us": 100000, 00:19:29.782 "enable": false 00:19:29.782 } 00:19:29.782 }, 00:19:29.782 { 00:19:29.782 "method": "bdev_malloc_create", 00:19:29.782 "params": { 00:19:29.782 "name": "malloc0", 00:19:29.782 "num_blocks": 8192, 00:19:29.782 "block_size": 4096, 00:19:29.782 "physical_block_size": 4096, 00:19:29.782 "uuid": "5402df29-b37d-45ae-b5aa-39051fa67eb1", 00:19:29.782 "optimal_io_boundary": 0, 00:19:29.782 "md_size": 0, 00:19:29.782 "dif_type": 0, 00:19:29.782 "dif_is_head_of_md": false, 00:19:29.782 "dif_pi_format": 0 00:19:29.782 } 00:19:29.782 }, 00:19:29.782 { 00:19:29.782 "method": "bdev_wait_for_examine" 00:19:29.782 } 00:19:29.782 ] 00:19:29.782 }, 00:19:29.782 { 00:19:29.782 "subsystem": "nbd", 00:19:29.782 "config": [] 00:19:29.782 }, 00:19:29.782 { 00:19:29.782 "subsystem": "scheduler", 00:19:29.782 "config": [ 00:19:29.782 { 00:19:29.782 "method": "framework_set_scheduler", 00:19:29.782 "params": { 00:19:29.782 "name": "static" 00:19:29.782 } 00:19:29.782 } 00:19:29.782 ] 00:19:29.782 }, 00:19:29.782 { 00:19:29.782 "subsystem": "nvmf", 00:19:29.782 "config": [ 00:19:29.782 { 00:19:29.782 "method": "nvmf_set_config", 00:19:29.782 "params": { 00:19:29.782 "discovery_filter": "match_any", 00:19:29.782 "admin_cmd_passthru": { 00:19:29.782 "identify_ctrlr": false 00:19:29.782 }, 00:19:29.782 "dhchap_digests": [ 00:19:29.782 "sha256", 00:19:29.782 "sha384", 00:19:29.782 "sha512" 00:19:29.782 ], 00:19:29.782 "dhchap_dhgroups": [ 00:19:29.782 "null", 00:19:29.782 "ffdhe2048", 00:19:29.782 "ffdhe3072", 00:19:29.782 "ffdhe4096", 
00:19:29.782 "ffdhe6144", 00:19:29.782 "ffdhe8192" 00:19:29.782 ] 00:19:29.782 } 00:19:29.782 }, 00:19:29.782 { 00:19:29.782 "method": "nvmf_set_max_subsystems", 00:19:29.782 "params": { 00:19:29.782 "max_subsystems": 1024 00:19:29.783 } 00:19:29.783 }, 00:19:29.783 { 00:19:29.783 "method": "nvmf_set_crdt", 00:19:29.783 "params": { 00:19:29.783 "crdt1": 0, 00:19:29.783 "crdt2": 0, 00:19:29.783 "crdt3": 0 00:19:29.783 } 00:19:29.783 }, 00:19:29.783 { 00:19:29.783 "method": "nvmf_create_transport", 00:19:29.783 "params": { 00:19:29.783 "trtype": "TCP", 00:19:29.783 "max_queue_depth": 128, 00:19:29.783 "max_io_qpairs_per_ctrlr": 127, 00:19:29.783 "in_capsule_data_size": 4096, 00:19:29.783 "max_io_size": 131072, 00:19:29.783 "io_unit_size": 131072, 00:19:29.783 "max_aq_depth": 128, 00:19:29.783 "num_shared_buffers": 511, 00:19:29.783 "buf_cache_size": 4294967295, 00:19:29.783 "dif_insert_or_strip": false, 00:19:29.783 "zcopy": false, 00:19:29.783 "c2h_success": false, 00:19:29.783 "sock_priority": 0, 00:19:29.783 "abort_timeout_sec": 1, 00:19:29.783 "ack_timeout": 0, 00:19:29.783 "data_wr_pool_size": 0 00:19:29.783 } 00:19:29.783 }, 00:19:29.783 { 00:19:29.783 "method": "nvmf_create_subsystem", 00:19:29.783 "params": { 00:19:29.783 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.783 "allow_any_host": false, 00:19:29.783 "serial_number": "00000000000000000000", 00:19:29.783 "model_number": "SPDK bdev Controller", 00:19:29.783 "max_namespaces": 32, 00:19:29.783 "min_cntlid": 1, 00:19:29.783 "max_cntlid": 65519, 00:19:29.783 "ana_reporting": false 00:19:29.783 } 00:19:29.783 }, 00:19:29.783 { 00:19:29.783 "method": "nvmf_subsystem_add_host", 00:19:29.783 "params": { 00:19:29.783 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.783 "host": "nqn.2016-06.io.spdk:host1", 00:19:29.783 "psk": "key0" 00:19:29.783 } 00:19:29.783 }, 00:19:29.783 { 00:19:29.783 "method": "nvmf_subsystem_add_ns", 00:19:29.783 "params": { 00:19:29.783 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.783 "namespace": { 00:19:29.783 "nsid": 1, 00:19:29.783 "bdev_name": "malloc0", 00:19:29.783 "nguid": "5402DF29B37D45AEB5AA39051FA67EB1", 00:19:29.783 "uuid": "5402df29-b37d-45ae-b5aa-39051fa67eb1", 00:19:29.783 "no_auto_visible": false 00:19:29.783 } 00:19:29.783 } 00:19:29.783 }, 00:19:29.783 { 00:19:29.783 "method": "nvmf_subsystem_add_listener", 00:19:29.783 "params": { 00:19:29.783 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.783 "listen_address": { 00:19:29.783 "trtype": "TCP", 00:19:29.783 "adrfam": "IPv4", 00:19:29.783 "traddr": "10.0.0.2", 00:19:29.783 "trsvcid": "4420" 00:19:29.783 }, 00:19:29.783 "secure_channel": false, 00:19:29.783 "sock_impl": "ssl" 00:19:29.783 } 00:19:29.783 } 00:19:29.783 ] 00:19:29.783 } 00:19:29.783 ] 00:19:29.783 }' 00:19:29.783 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@263 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:30.042 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@263 -- # bperfcfg='{ 00:19:30.042 "subsystems": [ 00:19:30.042 { 00:19:30.042 "subsystem": "keyring", 00:19:30.042 "config": [ 00:19:30.042 { 00:19:30.042 "method": "keyring_file_add_key", 00:19:30.042 "params": { 00:19:30.042 "name": "key0", 00:19:30.042 "path": "/tmp/tmp.pMg9SCUkBu" 00:19:30.042 } 00:19:30.042 } 00:19:30.042 ] 00:19:30.042 }, 00:19:30.042 { 00:19:30.042 "subsystem": "iobuf", 00:19:30.042 "config": [ 00:19:30.042 { 00:19:30.042 "method": "iobuf_set_options", 00:19:30.042 "params": { 00:19:30.042 "small_pool_count": 8192, 00:19:30.042 
"large_pool_count": 1024, 00:19:30.042 "small_bufsize": 8192, 00:19:30.042 "large_bufsize": 135168 00:19:30.042 } 00:19:30.042 } 00:19:30.042 ] 00:19:30.042 }, 00:19:30.042 { 00:19:30.042 "subsystem": "sock", 00:19:30.042 "config": [ 00:19:30.042 { 00:19:30.042 "method": "sock_set_default_impl", 00:19:30.042 "params": { 00:19:30.042 "impl_name": "uring" 00:19:30.042 } 00:19:30.042 }, 00:19:30.042 { 00:19:30.042 "method": "sock_impl_set_options", 00:19:30.042 "params": { 00:19:30.042 "impl_name": "ssl", 00:19:30.042 "recv_buf_size": 4096, 00:19:30.042 "send_buf_size": 4096, 00:19:30.042 "enable_recv_pipe": true, 00:19:30.042 "enable_quickack": false, 00:19:30.042 "enable_placement_id": 0, 00:19:30.042 "enable_zerocopy_send_server": true, 00:19:30.042 "enable_zerocopy_send_client": false, 00:19:30.042 "zerocopy_threshold": 0, 00:19:30.042 "tls_version": 0, 00:19:30.042 "enable_ktls": false 00:19:30.042 } 00:19:30.042 }, 00:19:30.042 { 00:19:30.042 "method": "sock_impl_set_options", 00:19:30.042 "params": { 00:19:30.042 "impl_name": "posix", 00:19:30.042 "recv_buf_size": 2097152, 00:19:30.042 "send_buf_size": 2097152, 00:19:30.042 "enable_recv_pipe": true, 00:19:30.042 "enable_quickack": false, 00:19:30.042 "enable_placement_id": 0, 00:19:30.042 "enable_zerocopy_send_server": true, 00:19:30.042 "enable_zerocopy_send_client": false, 00:19:30.042 "zerocopy_threshold": 0, 00:19:30.042 "tls_version": 0, 00:19:30.042 "enable_ktls": false 00:19:30.042 } 00:19:30.042 }, 00:19:30.042 { 00:19:30.042 "method": "sock_impl_set_options", 00:19:30.042 "params": { 00:19:30.042 "impl_name": "uring", 00:19:30.042 "recv_buf_size": 2097152, 00:19:30.042 "send_buf_size": 2097152, 00:19:30.042 "enable_recv_pipe": true, 00:19:30.042 "enable_quickack": false, 00:19:30.043 "enable_placement_id": 0, 00:19:30.043 "enable_zerocopy_send_server": false, 00:19:30.043 "enable_zerocopy_send_client": false, 00:19:30.043 "zerocopy_threshold": 0, 00:19:30.043 "tls_version": 0, 00:19:30.043 "enable_ktls": false 00:19:30.043 } 00:19:30.043 } 00:19:30.043 ] 00:19:30.043 }, 00:19:30.043 { 00:19:30.043 "subsystem": "vmd", 00:19:30.043 "config": [] 00:19:30.043 }, 00:19:30.043 { 00:19:30.043 "subsystem": "accel", 00:19:30.043 "config": [ 00:19:30.043 { 00:19:30.043 "method": "accel_set_options", 00:19:30.043 "params": { 00:19:30.043 "small_cache_size": 128, 00:19:30.043 "large_cache_size": 16, 00:19:30.043 "task_count": 2048, 00:19:30.043 "sequence_count": 2048, 00:19:30.043 "buf_count": 2048 00:19:30.043 } 00:19:30.043 } 00:19:30.043 ] 00:19:30.043 }, 00:19:30.043 { 00:19:30.043 "subsystem": "bdev", 00:19:30.043 "config": [ 00:19:30.043 { 00:19:30.043 "method": "bdev_set_options", 00:19:30.043 "params": { 00:19:30.043 "bdev_io_pool_size": 65535, 00:19:30.043 "bdev_io_cache_size": 256, 00:19:30.043 "bdev_auto_examine": true, 00:19:30.043 "iobuf_small_cache_size": 128, 00:19:30.043 "iobuf_large_cache_size": 16 00:19:30.043 } 00:19:30.043 }, 00:19:30.043 { 00:19:30.043 "method": "bdev_raid_set_options", 00:19:30.043 "params": { 00:19:30.043 "process_window_size_kb": 1024, 00:19:30.043 "process_max_bandwidth_mb_sec": 0 00:19:30.043 } 00:19:30.043 }, 00:19:30.043 { 00:19:30.043 "method": "bdev_iscsi_set_options", 00:19:30.043 "params": { 00:19:30.043 "timeout_sec": 30 00:19:30.043 } 00:19:30.043 }, 00:19:30.043 { 00:19:30.043 "method": "bdev_nvme_set_options", 00:19:30.043 "params": { 00:19:30.043 "action_on_timeout": "none", 00:19:30.043 "timeout_us": 0, 00:19:30.043 "timeout_admin_us": 0, 00:19:30.043 "keep_alive_timeout_ms": 10000, 
00:19:30.043 "arbitration_burst": 0, 00:19:30.043 "low_priority_weight": 0, 00:19:30.043 "medium_priority_weight": 0, 00:19:30.043 "high_priority_weight": 0, 00:19:30.043 "nvme_adminq_poll_period_us": 10000, 00:19:30.043 "nvme_ioq_poll_period_us": 0, 00:19:30.043 "io_queue_requests": 512, 00:19:30.043 "delay_cmd_submit": true, 00:19:30.043 "transport_retry_count": 4, 00:19:30.043 "bdev_retry_count": 3, 00:19:30.043 "transport_ack_timeout": 0, 00:19:30.043 "ctrlr_loss_timeout_sec": 0, 00:19:30.043 "reconnect_delay_sec": 0, 00:19:30.043 "fast_io_fail_timeout_sec": 0, 00:19:30.043 "disable_auto_failback": false, 00:19:30.043 "generate_uuids": false, 00:19:30.043 "transport_tos": 0, 00:19:30.043 "nvme_error_stat": false, 00:19:30.043 "rdma_srq_size": 0, 00:19:30.043 "io_path_stat": false, 00:19:30.043 "allow_accel_sequence": false, 00:19:30.043 "rdma_max_cq_size": 0, 00:19:30.043 "rdma_cm_event_timeout_ms": 0, 00:19:30.043 "dhchap_digests": [ 00:19:30.043 "sha256", 00:19:30.043 "sha384", 00:19:30.043 "sha512" 00:19:30.043 ], 00:19:30.043 "dhchap_dhgroups": [ 00:19:30.043 "null", 00:19:30.043 "ffdhe2048", 00:19:30.043 "ffdhe3072", 00:19:30.043 "ffdhe4096", 00:19:30.043 "ffdhe6144", 00:19:30.043 "ffdhe8192" 00:19:30.043 ] 00:19:30.043 } 00:19:30.043 }, 00:19:30.043 { 00:19:30.043 "method": "bdev_nvme_attach_controller", 00:19:30.043 "params": { 00:19:30.043 "name": "nvme0", 00:19:30.043 "trtype": "TCP", 00:19:30.043 "adrfam": "IPv4", 00:19:30.043 "traddr": "10.0.0.2", 00:19:30.043 "trsvcid": "4420", 00:19:30.043 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.043 "prchk_reftag": false, 00:19:30.043 "prchk_guard": false, 00:19:30.043 "ctrlr_loss_timeout_sec": 0, 00:19:30.043 "reconnect_delay_sec": 0, 00:19:30.043 "fast_io_fail_timeout_sec": 0, 00:19:30.043 "psk": "key0", 00:19:30.043 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:30.043 "hdgst": false, 00:19:30.043 "ddgst": false 00:19:30.043 } 00:19:30.043 }, 00:19:30.043 { 00:19:30.043 "method": "bdev_nvme_set_hotplug", 00:19:30.043 "params": { 00:19:30.043 "period_us": 100000, 00:19:30.043 "enable": false 00:19:30.043 } 00:19:30.043 }, 00:19:30.043 { 00:19:30.043 "method": "bdev_enable_histogram", 00:19:30.043 "params": { 00:19:30.043 "name": "nvme0n1", 00:19:30.043 "enable": true 00:19:30.043 } 00:19:30.043 }, 00:19:30.043 { 00:19:30.043 "method": "bdev_wait_for_examine" 00:19:30.043 } 00:19:30.043 ] 00:19:30.043 }, 00:19:30.043 { 00:19:30.043 "subsystem": "nbd", 00:19:30.043 "config": [] 00:19:30.043 } 00:19:30.043 ] 00:19:30.043 }' 00:19:30.043 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # killprocess 72557 00:19:30.043 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72557 ']' 00:19:30.043 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72557 00:19:30.043 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:30.043 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.043 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72557 00:19:30.302 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:30.302 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:30.302 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 72557' 00:19:30.302 killing process with pid 72557 00:19:30.302 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72557 00:19:30.302 Received shutdown signal, test time was about 1.000000 seconds 00:19:30.302 00:19:30.302 Latency(us) 00:19:30.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.302 =================================================================================================================== 00:19:30.302 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:30.302 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72557 00:19:30.302 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # killprocess 72525 00:19:30.302 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72525 ']' 00:19:30.302 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72525 00:19:30.302 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:30.303 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.303 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72525 00:19:30.303 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:30.303 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:30.303 killing process with pid 72525 00:19:30.303 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72525' 00:19:30.303 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72525 00:19:30.303 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72525 00:19:30.562 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # nvmfappstart -c /dev/fd/62 00:19:30.562 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:19:30.562 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:30.562 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.562 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # echo '{ 00:19:30.562 "subsystems": [ 00:19:30.562 { 00:19:30.562 "subsystem": "keyring", 00:19:30.562 "config": [ 00:19:30.562 { 00:19:30.562 "method": "keyring_file_add_key", 00:19:30.562 "params": { 00:19:30.562 "name": "key0", 00:19:30.562 "path": "/tmp/tmp.pMg9SCUkBu" 00:19:30.562 } 00:19:30.562 } 00:19:30.562 ] 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "subsystem": "iobuf", 00:19:30.562 "config": [ 00:19:30.562 { 00:19:30.562 "method": "iobuf_set_options", 00:19:30.562 "params": { 00:19:30.562 "small_pool_count": 8192, 00:19:30.562 "large_pool_count": 1024, 00:19:30.562 "small_bufsize": 8192, 00:19:30.562 "large_bufsize": 135168 00:19:30.562 } 00:19:30.562 } 00:19:30.562 ] 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "subsystem": "sock", 00:19:30.562 "config": [ 00:19:30.562 { 00:19:30.562 "method": "sock_set_default_impl", 00:19:30.562 "params": { 00:19:30.562 "impl_name": "uring" 00:19:30.562 } 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "method": "sock_impl_set_options", 00:19:30.562 "params": { 00:19:30.562 "impl_name": "ssl", 
00:19:30.562 "recv_buf_size": 4096, 00:19:30.562 "send_buf_size": 4096, 00:19:30.562 "enable_recv_pipe": true, 00:19:30.562 "enable_quickack": false, 00:19:30.562 "enable_placement_id": 0, 00:19:30.562 "enable_zerocopy_send_server": true, 00:19:30.562 "enable_zerocopy_send_client": false, 00:19:30.562 "zerocopy_threshold": 0, 00:19:30.562 "tls_version": 0, 00:19:30.562 "enable_ktls": false 00:19:30.562 } 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "method": "sock_impl_set_options", 00:19:30.562 "params": { 00:19:30.562 "impl_name": "posix", 00:19:30.562 "recv_buf_size": 2097152, 00:19:30.562 "send_buf_size": 2097152, 00:19:30.562 "enable_recv_pipe": true, 00:19:30.562 "enable_quickack": false, 00:19:30.562 "enable_placement_id": 0, 00:19:30.562 "enable_zerocopy_send_server": true, 00:19:30.562 "enable_zerocopy_send_client": false, 00:19:30.562 "zerocopy_threshold": 0, 00:19:30.562 "tls_version": 0, 00:19:30.562 "enable_ktls": false 00:19:30.562 } 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "method": "sock_impl_set_options", 00:19:30.562 "params": { 00:19:30.562 "impl_name": "uring", 00:19:30.562 "recv_buf_size": 2097152, 00:19:30.562 "send_buf_size": 2097152, 00:19:30.562 "enable_recv_pipe": true, 00:19:30.562 "enable_quickack": false, 00:19:30.562 "enable_placement_id": 0, 00:19:30.562 "enable_zerocopy_send_server": false, 00:19:30.562 "enable_zerocopy_send_client": false, 00:19:30.562 "zerocopy_threshold": 0, 00:19:30.562 "tls_version": 0, 00:19:30.562 "enable_ktls": false 00:19:30.562 } 00:19:30.562 } 00:19:30.562 ] 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "subsystem": "vmd", 00:19:30.562 "config": [] 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "subsystem": "accel", 00:19:30.562 "config": [ 00:19:30.562 { 00:19:30.562 "method": "accel_set_options", 00:19:30.562 "params": { 00:19:30.562 "small_cache_size": 128, 00:19:30.562 "large_cache_size": 16, 00:19:30.562 "task_count": 2048, 00:19:30.562 "sequence_count": 2048, 00:19:30.562 "buf_count": 2048 00:19:30.562 } 00:19:30.562 } 00:19:30.562 ] 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "subsystem": "bdev", 00:19:30.562 "config": [ 00:19:30.562 { 00:19:30.562 "method": "bdev_set_options", 00:19:30.562 "params": { 00:19:30.562 "bdev_io_pool_size": 65535, 00:19:30.562 "bdev_io_cache_size": 256, 00:19:30.562 "bdev_auto_examine": true, 00:19:30.562 "iobuf_small_cache_size": 128, 00:19:30.562 "iobuf_large_cache_size": 16 00:19:30.562 } 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "method": "bdev_raid_set_options", 00:19:30.562 "params": { 00:19:30.562 "process_window_size_kb": 1024, 00:19:30.562 "process_max_bandwidth_mb_sec": 0 00:19:30.562 } 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "method": "bdev_iscsi_set_options", 00:19:30.562 "params": { 00:19:30.562 "timeout_sec": 30 00:19:30.562 } 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "method": "bdev_nvme_set_options", 00:19:30.562 "params": { 00:19:30.562 "action_on_timeout": "none", 00:19:30.562 "timeout_us": 0, 00:19:30.562 "timeout_admin_us": 0, 00:19:30.562 "keep_alive_timeout_ms": 10000, 00:19:30.562 "arbitration_burst": 0, 00:19:30.562 "low_priority_weight": 0, 00:19:30.562 "medium_priority_weight": 0, 00:19:30.562 "high_priority_weight": 0, 00:19:30.562 "nvme_adminq_poll_period_us": 10000, 00:19:30.562 "nvme_ioq_poll_period_us": 0, 00:19:30.562 "io_queue_requests": 0, 00:19:30.562 "delay_cmd_submit": true, 00:19:30.562 "transport_retry_count": 4, 00:19:30.562 "bdev_retry_count": 3, 00:19:30.562 "transport_ack_timeout": 0, 00:19:30.562 "ctrlr_loss_timeout_sec": 0, 00:19:30.562 
"reconnect_delay_sec": 0, 00:19:30.562 "fast_io_fail_timeout_sec": 0, 00:19:30.562 "disable_auto_failback": false, 00:19:30.562 "generate_uuids": false, 00:19:30.562 "transport_tos": 0, 00:19:30.562 "nvme_error_stat": false, 00:19:30.562 "rdma_srq_size": 0, 00:19:30.562 "io_path_stat": false, 00:19:30.562 "allow_accel_sequence": false, 00:19:30.562 "rdma_max_cq_size": 0, 00:19:30.562 "rdma_cm_event_timeout_ms": 0, 00:19:30.562 "dhchap_digests": [ 00:19:30.562 "sha256", 00:19:30.562 "sha384", 00:19:30.562 "sha512" 00:19:30.562 ], 00:19:30.562 "dhchap_dhgroups": [ 00:19:30.562 "null", 00:19:30.562 "ffdhe2048", 00:19:30.562 "ffdhe3072", 00:19:30.562 "ffdhe4096", 00:19:30.562 "ffdhe6144", 00:19:30.562 "ffdhe8192" 00:19:30.562 ] 00:19:30.562 } 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "method": "bdev_nvme_set_hotplug", 00:19:30.562 "params": { 00:19:30.562 "period_us": 100000, 00:19:30.562 "enable": false 00:19:30.562 } 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "method": "bdev_malloc_create", 00:19:30.562 "params": { 00:19:30.562 "name": "malloc0", 00:19:30.562 "num_blocks": 8192, 00:19:30.562 "block_size": 4096, 00:19:30.562 "physical_block_size": 4096, 00:19:30.562 "uuid": "5402df29-b37d-45ae-b5aa-39051fa67eb1", 00:19:30.562 "optimal_io_boundary": 0, 00:19:30.562 "md_size": 0, 00:19:30.562 "dif_type": 0, 00:19:30.562 "dif_is_head_of_md": false, 00:19:30.562 "dif_pi_format": 0 00:19:30.562 } 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "method": "bdev_wait_for_examine" 00:19:30.562 } 00:19:30.562 ] 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "subsystem": "nbd", 00:19:30.562 "config": [] 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "subsystem": "scheduler", 00:19:30.562 "config": [ 00:19:30.562 { 00:19:30.562 "method": "framework_set_scheduler", 00:19:30.562 "params": { 00:19:30.562 "name": "static" 00:19:30.562 } 00:19:30.562 } 00:19:30.562 ] 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "subsystem": "nvmf", 00:19:30.562 "config": [ 00:19:30.562 { 00:19:30.562 "method": "nvmf_set_config", 00:19:30.562 "params": { 00:19:30.562 "discovery_filter": "match_any", 00:19:30.562 "admin_cmd_passthru": { 00:19:30.562 "identify_ctrlr": false 00:19:30.562 }, 00:19:30.562 "dhchap_digests": [ 00:19:30.562 "sha256", 00:19:30.562 "sha384", 00:19:30.562 "sha512" 00:19:30.562 ], 00:19:30.562 "dhchap_dhgroups": [ 00:19:30.562 "null", 00:19:30.562 "ffdhe2048", 00:19:30.562 "ffdhe3072", 00:19:30.562 "ffdhe4096", 00:19:30.562 "ffdhe6144", 00:19:30.562 "ffdhe8192" 00:19:30.562 ] 00:19:30.562 } 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "method": "nvmf_set_max_subsystems", 00:19:30.562 "params": { 00:19:30.562 "max_subsystems": 1024 00:19:30.562 } 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "method": "nvmf_set_crdt", 00:19:30.562 "params": { 00:19:30.562 "crdt1": 0, 00:19:30.562 "crdt2": 0, 00:19:30.562 "crdt3": 0 00:19:30.562 } 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "method": "nvmf_create_transport", 00:19:30.562 "params": { 00:19:30.562 "trtype": "TCP", 00:19:30.562 "max_queue_depth": 128, 00:19:30.562 "max_io_qpairs_per_ctrlr": 127, 00:19:30.562 "in_capsule_data_size": 4096, 00:19:30.562 "max_io_size": 131072, 00:19:30.562 "io_unit_size": 131072, 00:19:30.562 "max_aq_depth": 128, 00:19:30.562 "num_shared_buffers": 511, 00:19:30.562 "buf_cache_size": 4294967295, 00:19:30.562 "dif_insert_or_strip": false, 00:19:30.562 "zcopy": false, 00:19:30.562 "c2h_success": false, 00:19:30.562 "sock_priority": 0, 00:19:30.562 "abort_timeout_sec": 1, 00:19:30.562 "ack_timeout": 0, 00:19:30.562 "data_wr_pool_size": 0 
00:19:30.562 } 00:19:30.562 }, 00:19:30.562 { 00:19:30.562 "method": "nvmf_create_subsystem", 00:19:30.563 "params": { 00:19:30.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.563 "allow_any_host": false, 00:19:30.563 "serial_number": "00000000000000000000", 00:19:30.563 "model_number": "SPDK bdev Controller", 00:19:30.563 "max_namespaces": 32, 00:19:30.563 "min_cntlid": 1, 00:19:30.563 "max_cntlid": 65519, 00:19:30.563 "ana_reporting": false 00:19:30.563 } 00:19:30.563 }, 00:19:30.563 { 00:19:30.563 "method": "nvmf_subsystem_add_host", 00:19:30.563 "params": { 00:19:30.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.563 "host": "nqn.2016-06.io.spdk:host1", 00:19:30.563 "psk": "key0" 00:19:30.563 } 00:19:30.563 }, 00:19:30.563 { 00:19:30.563 "method": "nvmf_subsystem_add_ns", 00:19:30.563 "params": { 00:19:30.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.563 "namespace": { 00:19:30.563 "nsid": 1, 00:19:30.563 "bdev_name": "malloc0", 00:19:30.563 "nguid": "5402DF29B37D45AEB5AA39051FA67EB1", 00:19:30.563 "uuid": "5402df29-b37d-45ae-b5aa-39051fa67eb1", 00:19:30.563 "no_auto_visible": false 00:19:30.563 } 00:19:30.563 } 00:19:30.563 }, 00:19:30.563 { 00:19:30.563 "method": "nvmf_subsystem_add_listener", 00:19:30.563 "params": { 00:19:30.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.563 "listen_address": { 00:19:30.563 "trtype": "TCP", 00:19:30.563 "adrfam": "IPv4", 00:19:30.563 "traddr": "10.0.0.2", 00:19:30.563 "trsvcid": "4420" 00:19:30.563 }, 00:19:30.563 "secure_channel": false, 00:19:30.563 "sock_impl": "ssl" 00:19:30.563 } 00:19:30.563 } 00:19:30.563 ] 00:19:30.563 } 00:19:30.563 ] 00:19:30.563 }' 00:19:30.563 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@324 -- # nvmfpid=72623 00:19:30.563 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # waitforlisten 72623 00:19:30.563 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:30.563 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72623 ']' 00:19:30.563 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.563 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:30.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.563 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.563 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:30.563 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.563 [2024-09-27 13:24:32.379601] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:19:30.563 [2024-09-27 13:24:32.379719] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.821 [2024-09-27 13:24:32.516617] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.821 [2024-09-27 13:24:32.589830] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:30.821 [2024-09-27 13:24:32.590170] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.821 [2024-09-27 13:24:32.590329] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.821 [2024-09-27 13:24:32.590485] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.821 [2024-09-27 13:24:32.590634] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.821 [2024-09-27 13:24:32.590908] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.079 [2024-09-27 13:24:32.736404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:31.079 [2024-09-27 13:24:32.796847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.079 [2024-09-27 13:24:32.832496] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:31.079 [2024-09-27 13:24:32.832993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.646 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:31.646 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:31.646 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:19:31.646 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:31.646 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.647 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.647 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # bdevperf_pid=72655 00:19:31.647 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # waitforlisten 72655 /var/tmp/bdevperf.sock 00:19:31.647 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72655 ']' 00:19:31.647 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.647 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:31.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.647 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
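The JSON blob echoed to the target above is where TLS actually gets wired up: the keyring subsystem registers the PSK interchange file as "key0", nvmf_subsystem_add_host ties that key to the single host NQN allowed on cnode1, and the listener on 10.0.0.2:4420 is added with "sock_impl": "ssl" while "secure_channel" stays false, so the TLS handling comes from the ssl socket implementation. A minimal sketch of just those pieces, fed to the same nvmf_tgt binary from a file instead of /dev/fd/62, could look like the following; the output path, the omission of the malloc0 bdev/namespace, and dropping the "ip netns exec nvmf_ns_spdk" wrapper are simplifications for illustration, not the test's verbatim flow:

# Trimmed, illustrative target config: keyring + TLS listener only; the
# bdev/malloc0 namespace and the tuning defaults echoed above are omitted.
cat > /tmp/tls_tgt.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.pMg9SCUkBu" } }
      ]
    },
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } },
        { "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.2", "trsvcid": "4420" },
                      "secure_channel": false, "sock_impl": "ssl" } }
      ]
    }
  ]
}
EOF
# Same launch pattern as traced above, minus the network-namespace wrapper.
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /tmp/tls_tgt.json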
00:19:31.647 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:31.647 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:31.647 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.647 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:19:31.647 "subsystems": [ 00:19:31.647 { 00:19:31.647 "subsystem": "keyring", 00:19:31.647 "config": [ 00:19:31.647 { 00:19:31.647 "method": "keyring_file_add_key", 00:19:31.647 "params": { 00:19:31.647 "name": "key0", 00:19:31.647 "path": "/tmp/tmp.pMg9SCUkBu" 00:19:31.647 } 00:19:31.647 } 00:19:31.647 ] 00:19:31.647 }, 00:19:31.647 { 00:19:31.647 "subsystem": "iobuf", 00:19:31.647 "config": [ 00:19:31.647 { 00:19:31.647 "method": "iobuf_set_options", 00:19:31.647 "params": { 00:19:31.647 "small_pool_count": 8192, 00:19:31.647 "large_pool_count": 1024, 00:19:31.647 "small_bufsize": 8192, 00:19:31.647 "large_bufsize": 135168 00:19:31.647 } 00:19:31.647 } 00:19:31.647 ] 00:19:31.647 }, 00:19:31.647 { 00:19:31.647 "subsystem": "sock", 00:19:31.647 "config": [ 00:19:31.647 { 00:19:31.647 "method": "sock_set_default_impl", 00:19:31.647 "params": { 00:19:31.647 "impl_name": "uring" 00:19:31.647 } 00:19:31.647 }, 00:19:31.647 { 00:19:31.647 "method": "sock_impl_set_options", 00:19:31.647 "params": { 00:19:31.647 "impl_name": "ssl", 00:19:31.647 "recv_buf_size": 4096, 00:19:31.647 "send_buf_size": 4096, 00:19:31.647 "enable_recv_pipe": true, 00:19:31.647 "enable_quickack": false, 00:19:31.647 "enable_placement_id": 0, 00:19:31.647 "enable_zerocopy_send_server": true, 00:19:31.647 "enable_zerocopy_send_client": false, 00:19:31.647 "zerocopy_threshold": 0, 00:19:31.647 "tls_version": 0, 00:19:31.647 "enable_ktls": false 00:19:31.647 } 00:19:31.647 }, 00:19:31.647 { 00:19:31.647 "method": "sock_impl_set_options", 00:19:31.647 "params": { 00:19:31.647 "impl_name": "posix", 00:19:31.647 "recv_buf_size": 2097152, 00:19:31.647 "send_buf_size": 2097152, 00:19:31.647 "enable_recv_pipe": true, 00:19:31.647 "enable_quickack": false, 00:19:31.647 "enable_placement_id": 0, 00:19:31.647 "enable_zerocopy_send_server": true, 00:19:31.647 "enable_zerocopy_send_client": false, 00:19:31.647 "zerocopy_threshold": 0, 00:19:31.647 "tls_version": 0, 00:19:31.647 "enable_ktls": false 00:19:31.647 } 00:19:31.647 }, 00:19:31.647 { 00:19:31.647 "method": "sock_impl_set_options", 00:19:31.647 "params": { 00:19:31.647 "impl_name": "uring", 00:19:31.647 "recv_buf_size": 2097152, 00:19:31.647 "send_buf_size": 2097152, 00:19:31.647 "enable_recv_pipe": true, 00:19:31.647 "enable_quickack": false, 00:19:31.647 "enable_placement_id": 0, 00:19:31.647 "enable_zerocopy_send_server": false, 00:19:31.647 "enable_zerocopy_send_client": false, 00:19:31.647 "zerocopy_threshold": 0, 00:19:31.647 "tls_version": 0, 00:19:31.647 "enable_ktls": false 00:19:31.647 } 00:19:31.647 } 00:19:31.647 ] 00:19:31.647 }, 00:19:31.647 { 00:19:31.647 "subsystem": "vmd", 00:19:31.647 "config": [] 00:19:31.647 }, 00:19:31.647 { 00:19:31.647 "subsystem": "accel", 00:19:31.647 "config": [ 00:19:31.647 { 00:19:31.647 "method": "accel_set_options", 00:19:31.647 "params": { 00:19:31.647 "small_cache_size": 128, 00:19:31.647 "large_cache_size": 16, 00:19:31.647 "task_count": 2048, 00:19:31.647 "sequence_count": 2048, 00:19:31.647 "buf_count": 2048 
00:19:31.647 } 00:19:31.647 } 00:19:31.647 ] 00:19:31.647 }, 00:19:31.647 { 00:19:31.647 "subsystem": "bdev", 00:19:31.647 "config": [ 00:19:31.647 { 00:19:31.647 "method": "bdev_set_options", 00:19:31.647 "params": { 00:19:31.647 "bdev_io_pool_size": 65535, 00:19:31.647 "bdev_io_cache_size": 256, 00:19:31.647 "bdev_auto_examine": true, 00:19:31.647 "iobuf_small_cache_size": 128, 00:19:31.647 "iobuf_large_cache_size": 16 00:19:31.647 } 00:19:31.647 }, 00:19:31.647 { 00:19:31.647 "method": "bdev_raid_set_options", 00:19:31.647 "params": { 00:19:31.647 "process_window_size_kb": 1024, 00:19:31.647 "process_max_bandwidth_mb_sec": 0 00:19:31.647 } 00:19:31.647 }, 00:19:31.647 { 00:19:31.647 "method": "bdev_iscsi_set_options", 00:19:31.647 "params": { 00:19:31.647 "timeout_sec": 30 00:19:31.647 } 00:19:31.647 }, 00:19:31.647 { 00:19:31.647 "method": "bdev_nvme_set_options", 00:19:31.647 "params": { 00:19:31.647 "action_on_timeout": "none", 00:19:31.647 "timeout_us": 0, 00:19:31.647 "timeout_admin_us": 0, 00:19:31.647 "keep_alive_timeout_ms": 10000, 00:19:31.647 "arbitration_burst": 0, 00:19:31.647 "low_priority_weight": 0, 00:19:31.647 "medium_priority_weight": 0, 00:19:31.647 "high_priority_weight": 0, 00:19:31.647 "nvme_adminq_poll_period_us": 10000, 00:19:31.647 "nvme_ioq_poll_period_us": 0, 00:19:31.647 "io_queue_requests": 512, 00:19:31.647 "delay_cmd_submit": true, 00:19:31.647 "transport_retry_count": 4, 00:19:31.647 "bdev_retry_count": 3, 00:19:31.647 "transport_ack_timeout": 0, 00:19:31.647 "ctrlr_loss_timeout_sec": 0, 00:19:31.647 "reconnect_delay_sec": 0, 00:19:31.647 "fast_io_fail_timeout_sec": 0, 00:19:31.647 "disable_auto_failback": false, 00:19:31.647 "generate_uuids": false, 00:19:31.647 "transport_tos": 0, 00:19:31.647 "nvme_error_stat": false, 00:19:31.647 "rdma_srq_size": 0, 00:19:31.647 "io_path_stat": false, 00:19:31.647 "allow_accel_sequence": false, 00:19:31.647 "rdma_max_cq_size": 0, 00:19:31.647 "rdma_cm_event_timeout_ms": 0, 00:19:31.647 "dhchap_digests": [ 00:19:31.647 "sha256", 00:19:31.647 "sha384", 00:19:31.647 "sha512" 00:19:31.647 ], 00:19:31.647 "dhchap_dhgroups": [ 00:19:31.647 "null", 00:19:31.647 "ffdhe2048", 00:19:31.647 "ffdhe3072", 00:19:31.647 "ffdhe4096", 00:19:31.647 "ffdhe6144", 00:19:31.647 "ffdhe8192" 00:19:31.647 ] 00:19:31.647 } 00:19:31.647 }, 00:19:31.647 { 00:19:31.647 "method": "bdev_nvme_attach_controller", 00:19:31.647 "params": { 00:19:31.647 "name": "nvme0", 00:19:31.647 "trtype": "TCP", 00:19:31.647 "adrfam": "IPv4", 00:19:31.647 "traddr": "10.0.0.2", 00:19:31.647 "trsvcid": "4420", 00:19:31.647 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.647 "prchk_reftag": false, 00:19:31.647 "prchk_guard": false, 00:19:31.647 "ctrlr_loss_timeout_sec": 0, 00:19:31.647 "reconnect_delay_sec": 0, 00:19:31.647 "fast_io_fail_timeout_sec": 0, 00:19:31.647 "psk": "key0", 00:19:31.647 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.647 "hdgst": false, 00:19:31.647 "ddgst": false 00:19:31.647 } 00:19:31.647 }, 00:19:31.647 { 00:19:31.647 "method": "bdev_nvme_set_hotplug", 00:19:31.647 "params": { 00:19:31.647 "period_us": 100000, 00:19:31.647 "enable": false 00:19:31.647 } 00:19:31.647 }, 00:19:31.647 { 00:19:31.647 "method": "bdev_enable_histogram", 00:19:31.647 "params": { 00:19:31.647 "name": "nvme0n1", 00:19:31.647 "enable": true 00:19:31.647 } 00:19:31.647 }, 00:19:31.647 { 00:19:31.647 "method": "bdev_wait_for_examine" 00:19:31.647 } 00:19:31.647 ] 00:19:31.647 }, 00:19:31.647 { 00:19:31.647 "subsystem": "nbd", 00:19:31.647 "config": [] 00:19:31.647 } 
00:19:31.647 ] 00:19:31.647 }' 00:19:31.906 [2024-09-27 13:24:33.522255] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:19:31.906 [2024-09-27 13:24:33.523113] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72655 ] 00:19:31.906 [2024-09-27 13:24:33.664660] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.165 [2024-09-27 13:24:33.754423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.165 [2024-09-27 13:24:33.875270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:32.165 [2024-09-27 13:24:33.914657] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.100 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:33.100 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:33.100 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:33.100 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # jq -r '.[].name' 00:19:33.358 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.358 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:33.358 Running I/O for 1 seconds... 
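On the initiator side the same PSK shows up in the bdevperf config above: bdev_nvme_attach_controller carries "psk": "key0" together with the host NQN, so the verify workload whose results follow runs over an NVMe/TCP connection that completed a TLS handshake. bdevperf itself was started in wait mode (-z) with that config on fd 63; the two commands traced right above, checking that the controller registered as nvme0 and then firing perform_tests, condense to roughly the following sketch (socket path and controller name are the ones this test uses):

# Check-and-run step, as traced above: confirm the TLS-attached controller,
# then start the queued 1-second verify job via the bdevperf RPC socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

name=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]] || { echo "controller did not attach" >&2; exit 1; }

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests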
00:19:34.581 3269.00 IOPS, 12.77 MiB/s 00:19:34.581 Latency(us) 00:19:34.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.581 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:34.581 Verification LBA range: start 0x0 length 0x2000 00:19:34.581 nvme0n1 : 1.02 3327.33 13.00 0.00 0.00 38017.27 7298.33 38130.04 00:19:34.581 =================================================================================================================== 00:19:34.581 Total : 3327.33 13.00 0.00 0.00 38017.27 7298.33 38130.04 00:19:34.581 { 00:19:34.581 "results": [ 00:19:34.581 { 00:19:34.581 "job": "nvme0n1", 00:19:34.581 "core_mask": "0x2", 00:19:34.581 "workload": "verify", 00:19:34.581 "status": "finished", 00:19:34.581 "verify_range": { 00:19:34.581 "start": 0, 00:19:34.581 "length": 8192 00:19:34.581 }, 00:19:34.581 "queue_depth": 128, 00:19:34.581 "io_size": 4096, 00:19:34.581 "runtime": 1.021238, 00:19:34.581 "iops": 3327.334078833729, 00:19:34.581 "mibps": 12.997398745444254, 00:19:34.581 "io_failed": 0, 00:19:34.581 "io_timeout": 0, 00:19:34.581 "avg_latency_us": 38017.2665075713, 00:19:34.581 "min_latency_us": 7298.327272727272, 00:19:34.581 "max_latency_us": 38130.03636363636 00:19:34.581 } 00:19:34.581 ], 00:19:34.581 "core_count": 1 00:19:34.581 } 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # trap - SIGINT SIGTERM EXIT 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # cleanup 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:34.581 nvmf_trace.0 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72655 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72655 ']' 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72655 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72655 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:19:34.581 killing process with pid 72655 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:34.581 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72655' 00:19:34.582 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72655 00:19:34.582 Received shutdown signal, test time was about 1.000000 seconds 00:19:34.582 00:19:34.582 Latency(us) 00:19:34.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.582 =================================================================================================================== 00:19:34.582 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:34.582 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72655 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # nvmfcleanup 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@99 -- # sync 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@102 -- # set +e 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:34.840 rmmod nvme_tcp 00:19:34.840 rmmod nvme_fabrics 00:19:34.840 rmmod nvme_keyring 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@106 -- # set -e 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@107 -- # return 0 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # '[' -n 72623 ']' 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@333 -- # killprocess 72623 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72623 ']' 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72623 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72623 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:34.840 killing process with pid 72623 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72623' 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72623 00:19:34.840 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72623 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:19:35.099 13:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # nvmf_fini 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@264 -- # local dev 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@267 -- # remove_target_ns 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@268 -- # delete_main_bridge 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:19:35.099 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:19:35.358 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:35.358 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:19:35.358 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@271 -- # continue 00:19:35.358 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:35.358 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:19:35.358 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@271 -- # continue 00:19:35.358 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:19:35.358 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # _dev=0 00:19:35.358 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # dev_map=() 00:19:35.358 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@284 -- # iptr 00:19:35.358 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@538 -- # iptables-save 00:19:35.358 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:19:35.358 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@538 -- # iptables-restore 00:19:35.358 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.VI4GnECQVx /tmp/tmp.Th4M1tgnyN /tmp/tmp.pMg9SCUkBu 00:19:35.358 ************************************ 00:19:35.358 END TEST nvmf_tls 00:19:35.358 ************************************ 00:19:35.358 00:19:35.358 real 1m30.046s 00:19:35.358 user 2m31.502s 00:19:35.358 sys 0m26.723s 00:19:35.358 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:35.359 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.359 13:24:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:35.359 13:24:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:35.359 13:24:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:35.359 13:24:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:35.359 ************************************ 00:19:35.359 START TEST nvmf_fips 00:19:35.359 ************************************ 00:19:35.359 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:35.359 * Looking for test storage... 
00:19:35.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:35.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.359 --rc genhtml_branch_coverage=1 00:19:35.359 --rc genhtml_function_coverage=1 00:19:35.359 --rc genhtml_legend=1 00:19:35.359 --rc geninfo_all_blocks=1 00:19:35.359 --rc geninfo_unexecuted_blocks=1 00:19:35.359 00:19:35.359 ' 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:35.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.359 --rc genhtml_branch_coverage=1 00:19:35.359 --rc genhtml_function_coverage=1 00:19:35.359 --rc genhtml_legend=1 00:19:35.359 --rc geninfo_all_blocks=1 00:19:35.359 --rc geninfo_unexecuted_blocks=1 00:19:35.359 00:19:35.359 ' 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:35.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.359 --rc genhtml_branch_coverage=1 00:19:35.359 --rc genhtml_function_coverage=1 00:19:35.359 --rc genhtml_legend=1 00:19:35.359 --rc geninfo_all_blocks=1 00:19:35.359 --rc geninfo_unexecuted_blocks=1 00:19:35.359 00:19:35.359 ' 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:35.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.359 --rc genhtml_branch_coverage=1 00:19:35.359 --rc genhtml_function_coverage=1 00:19:35.359 --rc genhtml_legend=1 00:19:35.359 --rc geninfo_all_blocks=1 00:19:35.359 --rc geninfo_unexecuted_blocks=1 00:19:35.359 00:19:35.359 ' 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
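The lcov gate just traced and the `ge 3.1.1 3.0.0` OpenSSL gate a little further below both land in the same scripts/common.sh helper: each version string is split on dots and dashes into an array and the fields are compared left to right until one side wins. An illustrative re-implementation, not the verbatim SPDK helper and assuming purely numeric fields, behaves the same way for the two comparisons seen in this log:

# Field-by-field "greater or equal" check in the spirit of cmp_versions;
# illustrative only, assumes numeric dot/dash-separated fields.
version_ge() {
    local -a a b
    IFS='.-' read -ra a <<< "$1"
    IFS='.-' read -ra b <<< "$2"
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 0   # left side is newer
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 1   # right side is newer
    done
    return 0                                        # equal counts as ">="
}

version_ge 3.1.1 3.0.0 && echo "OpenSSL is new enough for the FIPS checks"
version_ge 1.15 2      || echo "lcov 1.15 predates 2.x, keep the 1.x coverage options"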
00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@50 -- # : 0 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.359 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:35.360 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@86 -- # awk '{print $2}' 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:35.360 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:19:35.619 Error setting digest 00:19:35.619 4002B0A74D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:35.619 4002B0A74D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:19:35.619 
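What the trace above establishes: OpenSSL is at least 3.0.0, the FIPS module (/usr/lib64/ossl-modules/fips.so) is present, and FIPS is actually enforced, which fips.sh proves by expecting a non-approved digest (MD5) to fail. A minimal standalone sketch of that last check, assuming the spdk_fips.conf generated by build_openssl_config is in the current directory:

  # activate the generated config that loads the fips and base providers
  export OPENSSL_CONF=spdk_fips.conf

  # both providers should be listed
  openssl list -providers | grep name

  # MD5 is not FIPS-approved, so it must be rejected when enforcement is on
  if openssl md5 /dev/null > /dev/null 2>&1; then
    echo "FIPS mode is NOT enforced" >&2
    exit 1
  fi
  echo "FIPS enforcement confirmed: MD5 was rejected"

The same rejection is what produces the 'Error setting digest' lines above; the NOT wrapper treats that failure (es=1) as the expected, passing outcome.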
13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # prepare_net_devs 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # local -g is_hw=no 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # remove_target_ns 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@276 -- # nvmf_veth_init 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@233 -- # create_target_ns 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@234 -- # create_main_bridge 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@114 -- # delete_main_bridge 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@130 -- # return 0 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:19:35.619 13:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:35.619 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@28 -- # local -g _dev 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # ips=() 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@160 -- # set_up initiator0 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:35.620 13:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@160 -- # set_up target0 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip link set target0 up 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@161 -- # set_up target0_br 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # add_to_ns target0 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772161 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:19:35.620 13:24:37 
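Each initiator/target pair in the trace follows the same recipe: a veth pair whose *_br end will be enslaved to the shared nvmf_br bridge, with the target end moved into the nvmf_ns_spdk namespace and both ends addressed out of 10.0.0.0/24. A condensed sketch of pair 0 using the device names from the trace (an illustration of the pattern, not the setup.sh helpers themselves):

  # shared bridge, created once
  ip link add nvmf_br type bridge
  ip link set nvmf_br up

  # initiator end stays in the default namespace
  ip link add initiator0 type veth peer name initiator0_br
  ip link set initiator0 up
  ip link set initiator0_br up

  # target end gets its own veth pair and is moved into the test namespace
  ip link add target0 type veth peer name target0_br
  ip link set target0 up
  ip link set target0_br up
  ip link set target0 netns nvmf_ns_spdk

  # addressing, as done in the next few trace entries
  ip addr add 10.0.0.1/24 dev initiator0
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
  ip netns exec nvmf_ns_spdk ip link set target0 up

  # the *_br peers are attached to the bridge a little later in the trace
  ip link set initiator0_br master nvmf_br
  ip link set target0_br master nvmf_br

Pair 1 (initiator1/target1, 10.0.0.3 and 10.0.0.4) repeats the same steps with the next two addresses from the pool, as the following entries show.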
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:19:35.620 10.0.0.1 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772162 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:19:35.620 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:19:35.880 10.0.0.2 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@75 -- # set_up initiator0 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip netns exec 
nvmf_ns_spdk ip link set target0 up 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@138 -- # set_up target0_br 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # ips=() 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@46 
-- # local key_initiator=initiator1 key_target=target1 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@160 -- # set_up initiator1 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@160 -- # set_up target1 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip link set target1 up 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@161 -- # set_up target1_br 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # add_to_ns target1 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772163 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:19:35.880 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:19:35.881 10.0.0.3 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772164 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:19:35.881 13:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:19:35.881 10.0.0.4 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@75 -- # set_up initiator1 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@138 -- # set_up target1_br 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ tcp == 
tcp ]] 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@38 -- # ping_ips 2 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo initiator0 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=initiator0 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # eval 'ip netns 
exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:35.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:35.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:19:35.881 00:19:35.881 --- 10.0.0.1 ping statistics --- 00:19:35.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.881 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=target0 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:35.881 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo target0 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=target0 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:19:35.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:35.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:19:35.882 00:19:35.882 --- 10.0.0.2 ping statistics --- 00:19:35.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.882 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair++ )) 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=initiator1 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo initiator1 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=initiator1 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:19:35.882 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:35.882 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:19:35.882 00:19:35.882 --- 10.0.0.3 ping statistics --- 00:19:35.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.882 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev target1 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=target1 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo target1 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=target1 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:19:35.882 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:19:36.140 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:36.140 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.102 ms 00:19:36.140 00:19:36.140 --- 10.0.0.4 ping statistics --- 00:19:36.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.140 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair++ )) 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # return 0 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo initiator0 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=initiator0 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:36.140 
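None of these addresses are carried in shell variables across helpers; set_ip records each one in the interface's ifalias, and nvmf_legacy_env recovers the legacy NVMF_* values by reading it back, which is what the cat calls in the surrounding trace are doing. A minimal sketch of that round trip with the names from this run:

  # written at assignment time (set_ip pipes the address through tee)
  echo 10.0.0.1 > /sys/class/net/initiator0/ifalias

  # read back later to populate the legacy variables
  NVMF_FIRST_INITIATOR_IP=$(cat /sys/class/net/initiator0/ifalias)

  # target interfaces live inside the namespace, so the read goes through ip netns exec
  NVMF_FIRST_TARGET_IP=$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias)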
13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=initiator1 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo initiator1 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=initiator1 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=target0 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo target0 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=target0 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.140 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@196 -- # get_target_ip_address 1 
NVMF_TARGET_NS_CMD 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev target1 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=target1 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo target1 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=target1 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@324 -- # nvmfpid=72969 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # waitforlisten 72969 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 72969 ']' 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
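nvmfappstart boils down to two steps: launch nvmf_tgt inside the namespace and block until its RPC socket answers. A reduced sketch of those steps (the wait loop here is an approximation; waitforlisten in autotest_common.sh also checks that the pid is still alive and enforces a deadline):

  # start the target with core mask 0x2 (core 1) inside the test namespace
  ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # poll the default RPC socket (/var/tmp/spdk.sock) until it responds
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
  done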
00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:36.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:36.141 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:36.141 [2024-09-27 13:24:37.879398] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:19:36.141 [2024-09-27 13:24:37.879497] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.399 [2024-09-27 13:24:38.014791] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.399 [2024-09-27 13:24:38.078377] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.399 [2024-09-27 13:24:38.078433] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.399 [2024-09-27 13:24:38.078445] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.399 [2024-09-27 13:24:38.078454] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.399 [2024-09-27 13:24:38.078461] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:36.399 [2024-09-27 13:24:38.078488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.399 [2024-09-27 13:24:38.109622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:36.399 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:36.399 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:19:36.399 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:19:36.399 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:36.399 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:36.399 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.399 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:36.399 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:36.399 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:36.399 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.evW 00:19:36.399 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:36.399 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.evW 00:19:36.399 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.evW 00:19:36.399 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.evW 00:19:36.399 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:36.966 [2024-09-27 13:24:38.516489] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.966 [2024-09-27 13:24:38.532473] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:36.966 [2024-09-27 13:24:38.532814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.966 malloc0 00:19:36.967 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:36.967 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73003 00:19:36.967 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:36.967 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73003 /var/tmp/bdevperf.sock 00:19:36.967 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73003 ']' 00:19:36.967 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.967 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:36.967 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:19:36.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.967 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:36.967 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:36.967 [2024-09-27 13:24:38.694907] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:19:36.967 [2024-09-27 13:24:38.695019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73003 ] 00:19:37.225 [2024-09-27 13:24:38.833910] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.225 [2024-09-27 13:24:38.903618] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.225 [2024-09-27 13:24:38.937754] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:38.160 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:38.160 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:19:38.160 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.evW 00:19:38.418 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:38.676 [2024-09-27 13:24:40.299297] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:38.676 TLSTESTn1 00:19:38.676 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:38.934 Running I/O for 10 seconds... 
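Before the I/O numbers land, it helps to collapse the TLS plumbing just traced into one place. A condensed sketch of the RPC sequence shown above (the PSK value is an illustrative placeholder, not a usable key; the run uses the key printed at fips.sh@137):

  key_path=$(mktemp -t spdk-psk.XXX)
  echo -n 'NVMeTLSkey-1:01:<base64-psk-material>:' > "$key_path"    # placeholder key material
  chmod 0600 "$key_path"
  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
  $rpc keyring_file_add_key key0 "$key_path"                        # fips.sh@151
  $rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0    # fips.sh@152
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # fips.sh@156

The bdevperf app itself was launched earlier with '-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10', so perform_tests drives a 10-second, queue-depth-128 verify workload of 4 KiB I/Os over the TLS connection, which is what the results below report.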
00:19:48.821 2616.00 IOPS, 10.22 MiB/s 3114.50 IOPS, 12.17 MiB/s 3297.67 IOPS, 12.88 MiB/s 3419.25 IOPS, 13.36 MiB/s 3494.40 IOPS, 13.65 MiB/s 3517.17 IOPS, 13.74 MiB/s 3530.00 IOPS, 13.79 MiB/s 3561.25 IOPS, 13.91 MiB/s 3549.44 IOPS, 13.87 MiB/s 3578.20 IOPS, 13.98 MiB/s 00:19:48.821 Latency(us) 00:19:48.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.821 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:48.821 Verification LBA range: start 0x0 length 0x2000 00:19:48.821 TLSTESTn1 : 10.02 3584.44 14.00 0.00 0.00 35645.31 6345.08 36938.47 00:19:48.821 =================================================================================================================== 00:19:48.821 Total : 3584.44 14.00 0.00 0.00 35645.31 6345.08 36938.47 00:19:48.821 { 00:19:48.821 "results": [ 00:19:48.821 { 00:19:48.821 "job": "TLSTESTn1", 00:19:48.821 "core_mask": "0x4", 00:19:48.821 "workload": "verify", 00:19:48.821 "status": "finished", 00:19:48.821 "verify_range": { 00:19:48.821 "start": 0, 00:19:48.821 "length": 8192 00:19:48.821 }, 00:19:48.821 "queue_depth": 128, 00:19:48.821 "io_size": 4096, 00:19:48.821 "runtime": 10.017474, 00:19:48.821 "iops": 3584.4365555628096, 00:19:48.821 "mibps": 14.001705295167225, 00:19:48.821 "io_failed": 0, 00:19:48.821 "io_timeout": 0, 00:19:48.821 "avg_latency_us": 35645.30868703747, 00:19:48.821 "min_latency_us": 6345.076363636364, 00:19:48.821 "max_latency_us": 36938.472727272725 00:19:48.821 } 00:19:48.821 ], 00:19:48.821 "core_count": 1 00:19:48.821 } 00:19:48.821 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:48.821 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:48.821 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:19:48.821 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:19:48.821 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:19:48.821 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:48.821 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:19:48.821 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:19:48.821 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:19:48.821 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:48.821 nvmf_trace.0 00:19:49.080 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:19:49.080 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73003 00:19:49.080 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73003 ']' 00:19:49.080 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73003 00:19:49.080 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:19:49.080 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:49.080 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- 
# ps --no-headers -o comm= 73003 00:19:49.080 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:49.080 killing process with pid 73003 00:19:49.080 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:49.080 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73003' 00:19:49.080 Received shutdown signal, test time was about 10.000000 seconds 00:19:49.080 00:19:49.080 Latency(us) 00:19:49.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.080 =================================================================================================================== 00:19:49.080 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:49.080 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73003 00:19:49.080 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73003 00:19:49.080 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:49.080 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@331 -- # nvmfcleanup 00:19:49.080 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@99 -- # sync 00:19:49.338 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:49.338 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@102 -- # set +e 00:19:49.338 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:49.338 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:49.338 rmmod nvme_tcp 00:19:49.338 rmmod nvme_fabrics 00:19:49.338 rmmod nvme_keyring 00:19:49.338 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:49.338 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@106 -- # set -e 00:19:49.338 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@107 -- # return 0 00:19:49.338 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # '[' -n 72969 ']' 00:19:49.338 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@333 -- # killprocess 72969 00:19:49.338 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 72969 ']' 00:19:49.338 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 72969 00:19:49.338 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:19:49.338 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:49.338 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72969 00:19:49.338 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:49.338 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:49.338 killing process with pid 72969 00:19:49.338 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72969' 00:19:49.338 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 72969 00:19:49.338 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # 
wait 72969 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # nvmf_fini 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@264 -- # local dev 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@267 -- # remove_target_ns 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@268 -- # delete_main_bridge 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 
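The teardown trace continues below with the target-side devices (already gone together with the network namespace) and the iptables restore; condensed, the nvmf_fini sequence for this veth topology amounts to roughly the following sketch (the netns delete command is an assumption, since _remove_target_ns is not expanded in this log):

  ip netns delete nvmf_ns_spdk 2> /dev/null        # assumed body of _remove_target_ns; takes target0/target1 with it
  ip link delete nvmf_br                           # main bridge
  ip link delete initiator0                        # initiator-side veth ends left in the default namespace
  ip link delete initiator1
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK_NVMF-tagged rules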
00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@271 -- # continue 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@271 -- # continue 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # _dev=0 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # dev_map=() 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@284 -- # iptr 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@538 -- # iptables-save 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@538 -- # iptables-restore 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.evW 00:19:49.597 00:19:49.597 real 0m14.410s 00:19:49.597 user 0m20.684s 00:19:49.597 sys 0m5.603s 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:49.597 ************************************ 00:19:49.597 END TEST nvmf_fips 00:19:49.597 ************************************ 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:49.597 13:24:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:49.856 ************************************ 00:19:49.856 START TEST nvmf_control_msg_list 00:19:49.856 ************************************ 00:19:49.856 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:49.856 * Looking for test storage... 
00:19:49.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:49.856 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:49.856 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:19:49.856 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:49.856 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:49.856 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:49.856 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:49.856 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:49.856 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:49.856 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:49.856 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:49.856 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:49.856 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:49.856 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:49.856 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:49.856 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:49.856 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:49.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.857 --rc genhtml_branch_coverage=1 00:19:49.857 --rc genhtml_function_coverage=1 00:19:49.857 --rc genhtml_legend=1 00:19:49.857 --rc geninfo_all_blocks=1 00:19:49.857 --rc geninfo_unexecuted_blocks=1 00:19:49.857 00:19:49.857 ' 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:49.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.857 --rc genhtml_branch_coverage=1 00:19:49.857 --rc genhtml_function_coverage=1 00:19:49.857 --rc genhtml_legend=1 00:19:49.857 --rc geninfo_all_blocks=1 00:19:49.857 --rc geninfo_unexecuted_blocks=1 00:19:49.857 00:19:49.857 ' 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:49.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.857 --rc genhtml_branch_coverage=1 00:19:49.857 --rc genhtml_function_coverage=1 00:19:49.857 --rc genhtml_legend=1 00:19:49.857 --rc geninfo_all_blocks=1 00:19:49.857 --rc geninfo_unexecuted_blocks=1 00:19:49.857 00:19:49.857 ' 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:49.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.857 --rc genhtml_branch_coverage=1 00:19:49.857 --rc genhtml_function_coverage=1 00:19:49.857 --rc genhtml_legend=1 00:19:49.857 --rc geninfo_all_blocks=1 00:19:49.857 --rc geninfo_unexecuted_blocks=1 00:19:49.857 00:19:49.857 ' 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@50 -- # : 0 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:49.857 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 
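The "[: : integer expression expected" complaint above comes from the traced test at nvmf/common.sh@31, '[' '' -eq 1 ']': -eq needs integers on both sides and the expansion was empty, so the test errors out and simply counts as false (the script carries on at @35 below). Reproduced by hand, with defensive variants shown for illustration only (VAR is a stand-in name, not the actual variable used in common.sh):

  [ '' -eq 1 ]             # bash: [: : integer expression expected (exit status 2)
  [ "${VAR:-0}" -eq 1 ]    # defaulting an empty/unset expansion to 0 avoids the message
  (( ${VAR:-0} == 1 ))     # arithmetic evaluation is another quiet alternative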
00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@292 -- # prepare_net_devs 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@254 -- # local -g is_hw=no 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # remove_target_ns 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@276 -- # nvmf_veth_init 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@233 -- # create_target_ns 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:19:49.857 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # 
eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@234 -- # create_main_bridge 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@114 -- # delete_main_bridge 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@130 -- # return 0 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@28 -- # local -g _dev 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # ips=() 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@51 -- # [[ tcp == tcp 
]] 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@160 -- # set_up initiator0 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@160 -- # set_up target0 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip link set target0 up 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@161 -- # set_up target0_br 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- 
# [[ -n '' ]] 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:19:49.858 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # add_to_ns target0 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772161 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:19:50.116 10.0.0.1 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772162 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:50.116 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:19:50.117 13:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:19:50.117 10.0.0.2 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@75 -- # set_up initiator0 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:19:50.117 
13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@138 -- # set_up target0_br 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # ips=() 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:19:50.117 
13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@160 -- # set_up initiator1 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@160 -- # set_up target1 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip link set target1 up 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@161 -- # set_up target1_br 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # add_to_ns target1 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:19:50.117 
13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772163 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:19:50.117 10.0.0.3 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772164 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:19:50.117 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:19:50.118 10.0.0.4 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@75 -- # set_up initiator1 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:19:50.118 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:19:50.377 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:19:50.377 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:19:50.377 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:50.377 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:19:50.377 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:19:50.377 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:19:50.377 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:19:50.377 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:19:50.377 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:19:50.377 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@138 -- # set_up target1_br 00:19:50.377 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:19:50.377 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:50.377 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:19:50.377 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:19:50.377 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:19:50.377 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@82 -- 
# ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:19:50.377 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:19:50.377 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:19:50.377 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:19:50.377 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:50.377 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:50.377 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@38 -- # ping_ips 2 00:19:50.377 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo initiator0 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=initiator0 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- 
# [[ -n NVMF_TARGET_NS_CMD ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:50.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:50.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:19:50.378 00:19:50.378 --- 10.0.0.1 ping statistics --- 00:19:50.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.378 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=target0 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo target0 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=target0 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:19:50.378 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:19:50.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:19:50.378 00:19:50.378 --- 10.0.0.2 ping statistics --- 00:19:50.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.378 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair++ )) 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=initiator1 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo initiator1 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=initiator1 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:19:50.378 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:50.378 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:19:50.378 00:19:50.378 --- 10.0.0.3 ping statistics --- 00:19:50.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.378 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev target1 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=target1 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo target1 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=target1 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:19:50.378 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:19:50.378 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:50.378 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.111 ms 00:19:50.378 00:19:50.378 --- 10.0.0.4 ping statistics --- 00:19:50.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.378 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair++ )) 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # return 0 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo initiator0 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=initiator0 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:19:50.379 
13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=initiator1 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo initiator1 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=initiator1 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=target0 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo target0 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=target0 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip netns exec 
nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev target1 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=target1 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo target1 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=target1 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:19:50.379 13:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@324 -- # nvmfpid=73386 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # waitforlisten 73386 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 73386 ']' 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:50.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:50.379 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:50.379 [2024-09-27 13:24:52.209908] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:19:50.379 [2024-09-27 13:24:52.210003] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.637 [2024-09-27 13:24:52.347342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.637 [2024-09-27 13:24:52.407700] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.637 [2024-09-27 13:24:52.407756] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.637 [2024-09-27 13:24:52.407767] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.637 [2024-09-27 13:24:52.407775] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.637 [2024-09-27 13:24:52.407782] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
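[annotation] The trace above shows nvmfappstart launching the target application inside the nvmf_ns_spdk namespace and then waiting for its RPC socket before the test proceeds. A minimal sketch of that start-and-wait pattern, using only the binary path, namespace name, and socket path visible in the log; the loop bound and sleep interval are illustrative assumptions, and the real waitforlisten helper adds retries and richer error reporting:

# Launch nvmf_tgt inside the test namespace, exactly as traced (-i 0 -e 0xFFFF).
ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# Poll until the process is alive and its RPC unix socket exists; waitforlisten
# does roughly this, with a timeout and an rpc.py liveness check on top.
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    [[ -S /var/tmp/spdk.sock ]] && break
    sleep 0.1
done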
00:19:50.637 [2024-09-27 13:24:52.407812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.637 [2024-09-27 13:24:52.438141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:50.900 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:50.900 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:19:50.900 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:19:50.900 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:50.900 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:50.900 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.900 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:50.900 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:19:50.900 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:50.900 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.900 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:50.900 [2024-09-27 13:24:52.529319] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:50.901 Malloc0 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:50.901 [2024-09-27 13:24:52.578541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73412 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73413 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73414 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73412 00:19:50.901 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:51.161 [2024-09-27 13:24:52.752814] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:51.161 [2024-09-27 13:24:52.763202] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:51.161 [2024-09-27 13:24:52.763519] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:52.094 Initializing NVMe Controllers 00:19:52.094 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:52.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:52.094 Initialization complete. Launching workers. 
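[annotation] The preceding rpc_cmd calls configure the target for this test, a TCP transport with a deliberately tiny control-message pool, one subsystem backed by a 32 MB malloc bdev with 512-byte blocks, and a listener on 10.0.0.2:4420, and then three spdk_nvme_perf initiators are started on separate cores against that listener; the per-core latency tables that follow are their output. A hedged sketch of the same sequence as a plain script; the rpc.py path and the use of the default /var/tmp/spdk.sock are assumptions, while the arguments mirror the log:

# Configure the running target over JSON-RPC (arguments copied from the trace).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
$rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
$rpc bdev_malloc_create -b Malloc0 32 512
$rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Three single-queue initiators on separate cores, all contending for the one
# control message configured above (--control-msg-num 1).
perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
addr='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
$perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r "$addr" & pid1=$!
$perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r "$addr" & pid2=$!
$perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r "$addr" & pid3=$!
wait "$pid1" "$pid2" "$pid3"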
00:19:52.094 ======================================================== 00:19:52.094 Latency(us) 00:19:52.094 Device Information : IOPS MiB/s Average min max 00:19:52.094 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3298.96 12.89 302.62 137.52 596.96 00:19:52.094 ======================================================== 00:19:52.094 Total : 3298.96 12.89 302.62 137.52 596.96 00:19:52.094 00:19:52.094 Initializing NVMe Controllers 00:19:52.094 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:52.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:52.094 Initialization complete. Launching workers. 00:19:52.094 ======================================================== 00:19:52.094 Latency(us) 00:19:52.094 Device Information : IOPS MiB/s Average min max 00:19:52.094 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3370.00 13.16 296.33 183.55 565.66 00:19:52.094 ======================================================== 00:19:52.094 Total : 3370.00 13.16 296.33 183.55 565.66 00:19:52.094 00:19:52.094 Initializing NVMe Controllers 00:19:52.095 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:52.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:52.095 Initialization complete. Launching workers. 00:19:52.095 ======================================================== 00:19:52.095 Latency(us) 00:19:52.095 Device Information : IOPS MiB/s Average min max 00:19:52.095 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3354.97 13.11 297.54 188.59 683.92 00:19:52.095 ======================================================== 00:19:52.095 Total : 3354.97 13.11 297.54 188.59 683.92 00:19:52.095 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73413 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73414 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@331 -- # nvmfcleanup 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@99 -- # sync 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@102 -- # set +e 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:52.095 rmmod nvme_tcp 00:19:52.095 rmmod nvme_fabrics 00:19:52.095 rmmod nvme_keyring 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@106 -- # set -e 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@107 -- # return 0 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # '[' -n 
73386 ']' 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@333 -- # killprocess 73386 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 73386 ']' 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 73386 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73386 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73386' 00:19:52.095 killing process with pid 73386 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 73386 00:19:52.095 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 73386 00:19:52.353 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:19:52.353 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # nvmf_fini 00:19:52.353 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@264 -- # local dev 00:19:52.353 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@267 -- # remove_target_ns 00:19:52.353 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:52.353 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:52.353 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:52.353 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@268 -- # delete_main_bridge 00:19:52.353 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:52.354 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:19:52.354 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:19:52.354 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:19:52.354 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:19:52.354 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:19:52.354 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:52.354 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:19:52.354 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:19:52.354 
13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:19:52.354 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:19:52.354 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:19:52.354 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:19:52.354 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:19:52.611 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:52.611 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:19:52.611 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:19:52.611 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:19:52.611 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:19:52.611 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:19:52.611 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:19:52.611 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:19:52.611 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@271 -- # continue 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@271 -- # continue 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # _dev=0 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # dev_map=() 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@284 -- # iptr 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@538 -- # iptables-save 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@538 -- # iptables-restore 00:19:52.612 00:19:52.612 real 0m2.802s 00:19:52.612 user 0m4.681s 00:19:52.612 sys 0m1.322s 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:52.612 ************************************ 00:19:52.612 END TEST nvmf_control_msg_list 00:19:52.612 
************************************ 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:52.612 ************************************ 00:19:52.612 START TEST nvmf_wait_for_buf 00:19:52.612 ************************************ 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:52.612 * Looking for test storage... 00:19:52.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:52.612 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:52.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.871 --rc genhtml_branch_coverage=1 00:19:52.871 --rc genhtml_function_coverage=1 00:19:52.871 --rc genhtml_legend=1 00:19:52.871 --rc geninfo_all_blocks=1 00:19:52.871 --rc geninfo_unexecuted_blocks=1 00:19:52.871 00:19:52.871 ' 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:52.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.871 --rc genhtml_branch_coverage=1 00:19:52.871 --rc genhtml_function_coverage=1 00:19:52.871 --rc genhtml_legend=1 00:19:52.871 --rc geninfo_all_blocks=1 00:19:52.871 --rc geninfo_unexecuted_blocks=1 00:19:52.871 00:19:52.871 ' 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:52.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.871 --rc genhtml_branch_coverage=1 00:19:52.871 --rc genhtml_function_coverage=1 00:19:52.871 --rc genhtml_legend=1 00:19:52.871 --rc geninfo_all_blocks=1 00:19:52.871 --rc geninfo_unexecuted_blocks=1 00:19:52.871 00:19:52.871 ' 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:52.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.871 --rc genhtml_branch_coverage=1 00:19:52.871 --rc genhtml_function_coverage=1 00:19:52.871 --rc genhtml_legend=1 00:19:52.871 --rc geninfo_all_blocks=1 00:19:52.871 --rc geninfo_unexecuted_blocks=1 00:19:52.871 00:19:52.871 ' 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:52.871 13:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:52.871 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@50 -- # : 0 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:52.872 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:52.872 13:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@292 -- # prepare_net_devs 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@254 -- # local -g is_hw=no 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # remove_target_ns 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@276 -- # nvmf_veth_init 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@233 -- # create_target_ns 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 
-- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@234 -- # create_main_bridge 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@114 -- # delete_main_bridge 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@130 -- # return 0 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@28 -- # local -g _dev 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # ips=() 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:19:52.872 
13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@160 -- # set_up initiator0 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:52.872 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@160 -- # set_up target0 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip link set target0 up 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@161 -- # set_up target0_br 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # [[ 
tcp == tcp ]] 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # add_to_ns target0 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772161 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:19:52.873 10.0.0.1 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772162 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:19:52.873 10.0.0.2 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@75 -- # set_up initiator0 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@138 -- # set_up target0_br 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:19:52.873 13:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # ips=() 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@160 -- # set_up initiator1 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:19:52.873 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:19:52.874 13:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:19:52.874 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:52.874 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:19:52.874 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:19:52.874 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:19:52.874 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:19:52.874 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:19:52.874 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:19:52.874 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@160 -- # set_up target1 00:19:52.874 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:19:52.874 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:52.874 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:19:52.874 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip link set target1 up 00:19:52.874 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@161 -- # set_up target1_br 00:19:52.874 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:19:52.874 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:52.874 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:19:52.874 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # add_to_ns target1 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772163 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 
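[editor's note] At this point the harness has finished wiring the first initiator/target pair and has started on the second. Condensed from the setup_interface_pair trace above, the per-pair plumbing is plain iproute2 plus one iptables rule; the sketch below reproduces pair 0 only, with the bookkeeping the harness also does (ifalias writes, dev_map entries, comment tags on the iptables rules) left out.

    # Pair-0 topology condensed from the trace above: one veth pair for the
    # initiator side (kept on the host), one for the target side (moved into
    # the nvmf_ns_spdk namespace), and the *_br peers of both pairs enslaved
    # to the nvmf_br bridge.
    ip netns add nvmf_ns_spdk
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk

    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0

    ip link set initiator0 up
    ip link set initiator0_br up
    ip link set initiator0_br master nvmf_br
    ip netns exec nvmf_ns_spdk ip link set target0 up
    ip link set target0_br up
    ip link set target0_br master nvmf_br

    # Let the NVMe/TCP listener port through on the initiator-facing device.
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT

[end editor's note]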
00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:19:53.133 10.0.0.3 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772164 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:19:53.133 10.0.0.4 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@75 -- # set_up initiator1 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 
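[editor's note] Addresses are not hard-coded per device. The trace shows setup_interfaces walking an integer pool that starts at 0x0a000001 (167772161) and handing two consecutive values to each pair; val_to_ip turns them into dotted quads (167772163 -> 10.0.0.3 and 167772164 -> 10.0.0.4 above), and the result is also written to the device's ifalias so it can be looked up again later. A stand-alone version of that step could look like the sketch below; the octet arithmetic is an assumption, since the trace only records the final printf, ip and tee commands.

    # Render a 32-bit value from the address pool as a dotted quad,
    # e.g. 167772163 -> 10.0.0.3. (Assumed implementation of val_to_ip.)
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) \
            $(( (val >> 16) & 0xff )) \
            $(( (val >> 8)  & 0xff )) \
            $((  val        & 0xff ))
    }

    addr=$(val_to_ip 167772163)                            # 10.0.0.3
    ip addr add "$addr/24" dev initiator1
    echo "$addr" | tee /sys/class/net/initiator1/ifalias   # stash the IP for later lookup

[end editor's note]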
00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@138 -- # set_up target1_br 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@38 -- # ping_ips 2 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:19:53.133 13:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo initiator0 00:19:53.133 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=initiator0 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:53.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:53.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:19:53.134 00:19:53.134 --- 10.0.0.1 ping statistics --- 00:19:53.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.134 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=target0 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo target0 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=target0 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:19:53.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:53.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.030 ms 00:19:53.134 00:19:53.134 --- 10.0.0.2 ping statistics --- 00:19:53.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.134 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair++ )) 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=initiator1 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo initiator1 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=initiator1 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:19:53.134 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:53.134 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:19:53.134 00:19:53.134 --- 10.0.0.3 ping statistics --- 00:19:53.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.134 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev target1 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=target1 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo target1 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=target1 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:19:53.134 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:53.134 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:19:53.134 00:19:53.134 --- 10.0.0.4 ping statistics --- 00:19:53.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.134 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair++ )) 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # return 0 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:53.134 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo initiator0 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=initiator0 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@183 -- # get_ip_address initiator1 
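[editor's note] With both pairs in place, ping_ips 2 above checks the data path in both directions: each initiator address is pinged from inside the target namespace, and each target address is pinged from the host, with the addresses read back from the ifalias files written during setup. Reduced to plain commands, the check is roughly:

    # Connectivity check equivalent to the ping_ips loop above: initiator IPs
    # are reached from inside the namespace, target IPs from the host.
    for pair in 0 1; do
        init_ip=$(cat /sys/class/net/initiator${pair}/ifalias)
        tgt_ip=$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target${pair}/ifalias)
        ip netns exec nvmf_ns_spdk ping -c 1 "$init_ip"   # 10.0.0.1 / 10.0.0.3
        ping -c 1 "$tgt_ip"                               # 10.0.0.2 / 10.0.0.4
    done

[end editor's note]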
00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=initiator1 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo initiator1 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=initiator1 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=target0 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo target0 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=target0 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev target1 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=target1 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo target1 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=target1 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:19:53.135 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:19:53.393 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:53.393 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:19:53.393 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:53.393 
13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:53.393 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@324 -- # nvmfpid=73651 00:19:53.394 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # waitforlisten 73651 00:19:53.394 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 73651 ']' 00:19:53.394 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.394 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:53.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.394 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.394 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:53.394 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:53.394 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:53.394 [2024-09-27 13:24:55.039453] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:19:53.394 [2024-09-27 13:24:55.039561] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.394 [2024-09-27 13:24:55.172736] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.394 [2024-09-27 13:24:55.231385] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.394 [2024-09-27 13:24:55.231443] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.394 [2024-09-27 13:24:55.231455] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.394 [2024-09-27 13:24:55.231463] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.394 [2024-09-27 13:24:55.231470] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
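[editor's note] nvmfappstart launches the target application inside the namespace with --wait-for-rpc, so the framework stays paused until the test pushes its buffer-pool settings over RPC, and waitforlisten blocks until the RPC socket answers. Stripped of the helper wrappers, the start-up logged above amounts to roughly the following; the polling loop is a simplified stand-in for waitforlisten, which as the trace shows also bounds the wait (max_retries=100) and keeps checking that the process is still alive.

    # Start nvmf_tgt inside the target namespace, paused until framework_start_init.
    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!

    # Simplified wait: poll until the RPC server on /var/tmp/spdk.sock responds.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

[end editor's note]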
00:19:53.394 [2024-09-27 13:24:55.231503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:53.652 [2024-09-27 13:24:55.388765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:53.652 Malloc0 00:19:53.652 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.653 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:53.653 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.653 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:53.653 [2024-09-27 13:24:55.436993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.653 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.653 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:53.653 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.653 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:53.653 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.653 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:53.653 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.653 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:53.653 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.653 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:53.653 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.653 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:53.653 [2024-09-27 13:24:55.461150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.653 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.653 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:53.911 [2024-09-27 13:24:55.659814] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:55.333 Initializing NVMe Controllers 00:19:55.333 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:55.333 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:55.333 Initialization complete. Launching workers. 
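Condensed, the target configuration traced above for wait_for_buf is a short sequence of rpc_cmd calls (rpc_cmd being the autotest wrapper around scripts/rpc.py seen throughout the trace), followed by the perf run. A sketch with the arguments copied from the trace:

# Shrink the small iobuf pool so buffer allocations can run dry during I/O.
rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
rpc_cmd framework_start_init
# Back the subsystem with a 32 MiB malloc bdev using 512-byte blocks.
rpc_cmd bdev_malloc_create -b Malloc0 32 512
# TCP transport with deliberately small buffer counts (flags as traced above).
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Drive reads from the host side so the undersized pool is actually exhausted.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'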
00:19:55.333 ======================================================== 00:19:55.333 Latency(us) 00:19:55.334 Device Information : IOPS MiB/s Average min max 00:19:55.334 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 504.00 63.00 7992.50 6107.65 9907.39 00:19:55.334 ======================================================== 00:19:55.334 Total : 504.00 63.00 7992.50 6107.65 9907.39 00:19:55.334 00:19:55.334 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:55.334 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.334 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:55.334 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:55.334 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4788 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4788 -eq 0 ]] 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@331 -- # nvmfcleanup 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@99 -- # sync 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@102 -- # set +e 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:55.334 rmmod nvme_tcp 00:19:55.334 rmmod nvme_fabrics 00:19:55.334 rmmod nvme_keyring 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@106 -- # set -e 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@107 -- # return 0 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # '[' -n 73651 ']' 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@333 -- # killprocess 73651 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 73651 ']' 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 73651 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73651 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:55.334 killing process with pid 73651 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73651' 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 73651 00:19:55.334 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 73651 00:19:55.593 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:19:55.593 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # nvmf_fini 00:19:55.593 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@264 -- # local dev 00:19:55.593 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@267 -- # remove_target_ns 00:19:55.593 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:55.593 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:55.593 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:55.593 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@268 -- # delete_main_bridge 00:19:55.593 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:55.593 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:19:55.593 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:19:55.593 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:19:55.593 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:19:55.593 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:19:55.852 13:24:57 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@271 -- # continue 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@271 -- # continue 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # _dev=0 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # dev_map=() 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@284 -- # iptr 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@538 -- # iptables-save 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@538 -- # iptables-restore 00:19:55.852 00:19:55.852 real 0m3.223s 00:19:55.852 user 0m2.659s 00:19:55.852 sys 0m0.785s 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:55.852 ************************************ 00:19:55.852 END TEST nvmf_wait_for_buf 00:19:55.852 ************************************ 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:55.852 ************************************ 00:19:55.852 END TEST nvmf_target_extra 00:19:55.852 ************************************ 00:19:55.852 00:19:55.852 real 5m34.840s 00:19:55.852 user 12m2.545s 00:19:55.852 sys 1m9.316s 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:55.852 13:24:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:55.852 13:24:57 nvmf_tcp -- nvmf/nvmf.sh@12 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:55.852 13:24:57 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:55.852 13:24:57 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 
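For reference, the pass condition that produced retry_count=4788 above reduces to a single iobuf statistic check. A minimal sketch of the logic traced from target/wait_for_buf.sh (the error message here is illustrative only):

# The nvmf_TCP small iobuf pool must have hit at least one allocation retry.
retry_count=$(rpc_cmd iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
if [[ $retry_count -eq 0 ]]; then
    echo "small iobuf pool was never exhausted; wait-for-buf path not exercised" >&2
    exit 1
fi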
00:19:55.852 13:24:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:55.852 ************************************ 00:19:55.852 START TEST nvmf_host 00:19:55.852 ************************************ 00:19:55.852 13:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:55.852 * Looking for test storage... 00:19:55.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:19:55.852 13:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:55.852 13:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:19:55.852 13:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:56.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.112 --rc genhtml_branch_coverage=1 00:19:56.112 --rc genhtml_function_coverage=1 00:19:56.112 --rc genhtml_legend=1 00:19:56.112 --rc geninfo_all_blocks=1 00:19:56.112 --rc geninfo_unexecuted_blocks=1 00:19:56.112 00:19:56.112 ' 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:56.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.112 --rc genhtml_branch_coverage=1 00:19:56.112 --rc genhtml_function_coverage=1 00:19:56.112 --rc genhtml_legend=1 00:19:56.112 --rc geninfo_all_blocks=1 00:19:56.112 --rc geninfo_unexecuted_blocks=1 00:19:56.112 00:19:56.112 ' 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:56.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.112 --rc genhtml_branch_coverage=1 00:19:56.112 --rc genhtml_function_coverage=1 00:19:56.112 --rc genhtml_legend=1 00:19:56.112 --rc geninfo_all_blocks=1 00:19:56.112 --rc geninfo_unexecuted_blocks=1 00:19:56.112 00:19:56.112 ' 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:56.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.112 --rc genhtml_branch_coverage=1 00:19:56.112 --rc genhtml_function_coverage=1 00:19:56.112 --rc genhtml_legend=1 00:19:56.112 --rc geninfo_all_blocks=1 00:19:56.112 --rc geninfo_unexecuted_blocks=1 00:19:56.112 00:19:56.112 ' 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.112 13:24:57 
nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@50 -- # : 0 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:56.112 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:19:56.112 13:24:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@20 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.113 ************************************ 00:19:56.113 START TEST nvmf_identify 00:19:56.113 ************************************ 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:56.113 * Looking for test storage... 
00:19:56.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:56.113 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:56.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.372 --rc genhtml_branch_coverage=1 00:19:56.372 --rc genhtml_function_coverage=1 00:19:56.372 --rc genhtml_legend=1 00:19:56.372 --rc geninfo_all_blocks=1 00:19:56.372 --rc geninfo_unexecuted_blocks=1 00:19:56.372 00:19:56.372 ' 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:56.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.372 --rc genhtml_branch_coverage=1 00:19:56.372 --rc genhtml_function_coverage=1 00:19:56.372 --rc genhtml_legend=1 00:19:56.372 --rc geninfo_all_blocks=1 00:19:56.372 --rc geninfo_unexecuted_blocks=1 00:19:56.372 00:19:56.372 ' 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:56.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.372 --rc genhtml_branch_coverage=1 00:19:56.372 --rc genhtml_function_coverage=1 00:19:56.372 --rc genhtml_legend=1 00:19:56.372 --rc geninfo_all_blocks=1 00:19:56.372 --rc geninfo_unexecuted_blocks=1 00:19:56.372 00:19:56.372 ' 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:56.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.372 --rc genhtml_branch_coverage=1 00:19:56.372 --rc genhtml_function_coverage=1 00:19:56.372 --rc genhtml_legend=1 00:19:56.372 --rc geninfo_all_blocks=1 00:19:56.372 --rc geninfo_unexecuted_blocks=1 00:19:56.372 00:19:56.372 ' 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.372 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@50 -- # : 0 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:56.373 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # 
prepare_net_devs 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # local -g is_hw=no 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # remove_target_ns 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@276 -- # nvmf_veth_init 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@233 -- # create_target_ns 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:56.373 13:24:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@234 -- # create_main_bridge 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@114 -- # delete_main_bridge 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@130 -- # return 0 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:19:56.373 13:24:58 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:56.373 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@28 -- # local -g _dev 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # ips=() 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@160 -- # set_up initiator0 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:19:56.374 13:24:58 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@160 -- # set_up target0 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set target0 up 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@161 -- # set_up target0_br 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # add_to_ns target0 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772161 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 
00:19:56.374 10.0.0.1 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772162 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:19:56.374 10.0.0.2 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@75 -- # set_up initiator0 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 
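Stripped of the helper-function tracing, the topology nvmf_veth_init is building here (pair 0 above, pair 1 in the entries that follow) is a network namespace, a bridge, and veth pairs whose *_br ends are enslaved to the bridge. A condensed sketch of pair 0 using the addresses from the trace:

# Target side lives in its own namespace, joined to the host through a bridge.
ip netns add nvmf_ns_spdk
ip link add nvmf_br type bridge
ip link set nvmf_br up
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Pair 0: initiator0 stays on the host, target0 moves into the namespace.
ip link add initiator0 type veth peer name initiator0_br
ip link add target0 type veth peer name target0_br
ip link set target0 netns nvmf_ns_spdk
ip addr add 10.0.0.1/24 dev initiator0
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
ip link set initiator0 up
ip netns exec nvmf_ns_spdk ip link set target0 up
# The *_br ends attach to the bridge so initiator and target can reach each other.
ip link set initiator0_br master nvmf_br
ip link set target0_br master nvmf_br
ip link set initiator0_br up
ip link set target0_br up
# NVMe/TCP traffic to port 4420 on the initiator side is explicitly allowed.
iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
# The scripts also record each address in the device's ifalias for later lookup,
# as seen in the tee /sys/class/net/*/ifalias entries above.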
00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@138 -- # set_up target0_br 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:19:56.374 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # ips=() 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:19:56.375 13:24:58 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@160 -- # set_up initiator1 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@160 -- # set_up target1 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set target1 up 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@161 -- # set_up target1_br 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # add_to_ns target1 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:19:56.375 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:56.634 
13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772163 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:19:56.634 10.0.0.3 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772164 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:19:56.634 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:19:56.634 10.0.0.4 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@75 -- # set_up initiator1 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@138 -- # set_up target1_br 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@38 -- # ping_ips 2 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair < 
pairs )) 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo initiator0 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=initiator0 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:56.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:56.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:19:56.635 00:19:56.635 --- 10.0.0.1 ping statistics --- 00:19:56.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.635 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target0 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo target0 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=target0 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:19:56.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:56.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:19:56.635 00:19:56.635 --- 10.0.0.2 ping statistics --- 00:19:56.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.635 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair++ )) 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=initiator1 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo initiator1 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=initiator1 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:19:56.635 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:19:56.636 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:56.636 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:19:56.636 00:19:56.636 --- 10.0.0.3 ping statistics --- 00:19:56.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.636 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target1 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target1 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo target1 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=target1 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:19:56.636 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:56.636 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:19:56.636 00:19:56.636 --- 10.0.0.4 ping statistics --- 00:19:56.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.636 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair++ )) 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # return 0 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo initiator0 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=initiator0 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:19:56.636 13:24:58 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=initiator1 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo initiator1 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=initiator1 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target0 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo target0 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=target0 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:56.636 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target1 
in_ns=NVMF_TARGET_NS_CMD ip 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target1 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target1 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo target1 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=target1 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73965 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73965 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 73965 ']' 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:56.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
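The waitforlisten step above amounts to: launch nvmf_tgt inside the nvmf_ns_spdk namespace with the flags shown (-i 0 -e 0xFFFF -m 0xF), then poll the default RPC socket until the application answers. A rough standalone sketch under those assumptions (paths taken from the trace; rpc_get_methods is used here only as a liveness probe, and the real autotest helper has more elaborate retry and timeout handling):

    spdk=/home/vagrant/spdk_repo/spdk
    # Start the target inside the test namespace; the RPC socket is a UNIX socket,
    # so it stays reachable from the host side regardless of network namespace.
    ip netns exec nvmf_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll /var/tmp/spdk.sock until the app responds, bailing out if it died first.
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done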
00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:56.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:56.896 [2024-09-27 13:24:58.520490] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:19:56.896 [2024-09-27 13:24:58.520577] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.896 [2024-09-27 13:24:58.654806] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:56.896 [2024-09-27 13:24:58.727812] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.896 [2024-09-27 13:24:58.727878] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.896 [2024-09-27 13:24:58.727891] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.896 [2024-09-27 13:24:58.727901] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.896 [2024-09-27 13:24:58.727910] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.896 [2024-09-27 13:24:58.728080] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.896 [2024-09-27 13:24:58.728509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.896 [2024-09-27 13:24:58.729669] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:56.896 [2024-09-27 13:24:58.729733] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.155 [2024-09-27 13:24:58.763459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.092 [2024-09-27 13:24:59.588560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
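Condensed, the rpc_cmd calls traced here and in the lines that follow provision the target with a handful of RPCs; rpc_cmd is effectively the autotest wrapper around scripts/rpc.py, so the same setup can be sketched as direct rpc.py invocations with identical arguments (values verbatim from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8 KiB IO unit
    $rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB ramdisk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With that in place, the spdk_nvme_identify run further down simply points at the discovery service on 10.0.0.2:4420 and enumerates the subsystems listed by nvmf_get_subsystems below.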
00:19:58.092 Malloc0 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.092 [2024-09-27 13:24:59.668292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.092 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.092 [ 00:19:58.092 { 00:19:58.092 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:58.093 "subtype": "Discovery", 00:19:58.093 "listen_addresses": [ 00:19:58.093 { 00:19:58.093 "trtype": "TCP", 00:19:58.093 "adrfam": "IPv4", 00:19:58.093 "traddr": "10.0.0.2", 00:19:58.093 "trsvcid": "4420" 00:19:58.093 } 00:19:58.093 ], 00:19:58.093 "allow_any_host": true, 00:19:58.093 "hosts": [] 00:19:58.093 }, 00:19:58.093 { 00:19:58.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.093 "subtype": "NVMe", 00:19:58.093 "listen_addresses": [ 00:19:58.093 { 00:19:58.093 "trtype": "TCP", 00:19:58.093 "adrfam": "IPv4", 00:19:58.093 "traddr": "10.0.0.2", 00:19:58.093 "trsvcid": "4420" 00:19:58.093 } 00:19:58.093 ], 00:19:58.093 "allow_any_host": true, 00:19:58.093 "hosts": [], 00:19:58.093 "serial_number": "SPDK00000000000001", 00:19:58.093 "model_number": "SPDK bdev Controller", 00:19:58.093 "max_namespaces": 32, 00:19:58.093 "min_cntlid": 1, 00:19:58.093 "max_cntlid": 65519, 00:19:58.093 "namespaces": [ 
00:19:58.093 { 00:19:58.093 "nsid": 1, 00:19:58.093 "bdev_name": "Malloc0", 00:19:58.093 "name": "Malloc0", 00:19:58.093 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:58.093 "eui64": "ABCDEF0123456789", 00:19:58.093 "uuid": "813688fc-15c2-4a2d-937a-4d49a107ef91" 00:19:58.093 } 00:19:58.093 ] 00:19:58.093 } 00:19:58.093 ] 00:19:58.093 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.093 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:58.093 [2024-09-27 13:24:59.716738] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:19:58.093 [2024-09-27 13:24:59.716793] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74006 ] 00:19:58.093 [2024-09-27 13:24:59.860776] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:19:58.093 [2024-09-27 13:24:59.860847] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:58.093 [2024-09-27 13:24:59.860856] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:58.093 [2024-09-27 13:24:59.860870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:58.093 [2024-09-27 13:24:59.860881] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:58.093 [2024-09-27 13:24:59.861204] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:19:58.093 [2024-09-27 13:24:59.861278] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x54d750 0 00:19:58.093 [2024-09-27 13:24:59.868717] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:58.093 [2024-09-27 13:24:59.868743] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:58.093 [2024-09-27 13:24:59.868750] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:58.093 [2024-09-27 13:24:59.868754] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:58.093 [2024-09-27 13:24:59.868795] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.093 [2024-09-27 13:24:59.868803] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.093 [2024-09-27 13:24:59.868807] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x54d750) 00:19:58.093 [2024-09-27 13:24:59.868822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:58.093 [2024-09-27 13:24:59.868857] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1840, cid 0, qid 0 00:19:58.093 [2024-09-27 13:24:59.876700] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.093 [2024-09-27 13:24:59.876725] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.093 [2024-09-27 13:24:59.876732] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.093 [2024-09-27 
13:24:59.876738] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1840) on tqpair=0x54d750 00:19:58.093 [2024-09-27 13:24:59.876756] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:58.093 [2024-09-27 13:24:59.876766] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:58.093 [2024-09-27 13:24:59.876773] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:19:58.093 [2024-09-27 13:24:59.876790] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.093 [2024-09-27 13:24:59.876796] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.093 [2024-09-27 13:24:59.876801] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x54d750) 00:19:58.093 [2024-09-27 13:24:59.876811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.093 [2024-09-27 13:24:59.876842] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1840, cid 0, qid 0 00:19:58.093 [2024-09-27 13:24:59.876910] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.093 [2024-09-27 13:24:59.876918] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.093 [2024-09-27 13:24:59.876922] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.093 [2024-09-27 13:24:59.876927] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1840) on tqpair=0x54d750 00:19:58.093 [2024-09-27 13:24:59.876933] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:58.093 [2024-09-27 13:24:59.876942] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:58.093 [2024-09-27 13:24:59.876950] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.093 [2024-09-27 13:24:59.876955] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.093 [2024-09-27 13:24:59.876959] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x54d750) 00:19:58.093 [2024-09-27 13:24:59.876967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.093 [2024-09-27 13:24:59.876989] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1840, cid 0, qid 0 00:19:58.093 [2024-09-27 13:24:59.877040] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.093 [2024-09-27 13:24:59.877048] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.093 [2024-09-27 13:24:59.877052] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.093 [2024-09-27 13:24:59.877057] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1840) on tqpair=0x54d750 00:19:58.093 [2024-09-27 13:24:59.877063] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:58.093 [2024-09-27 13:24:59.877072] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:58.093 [2024-09-27 13:24:59.877080] 
nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.093 [2024-09-27 13:24:59.877085] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.093 [2024-09-27 13:24:59.877089] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x54d750) 00:19:58.093 [2024-09-27 13:24:59.877097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.093 [2024-09-27 13:24:59.877115] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1840, cid 0, qid 0 00:19:58.093 [2024-09-27 13:24:59.877161] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.093 [2024-09-27 13:24:59.877168] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.093 [2024-09-27 13:24:59.877173] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.093 [2024-09-27 13:24:59.877177] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1840) on tqpair=0x54d750 00:19:58.093 [2024-09-27 13:24:59.877183] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:58.093 [2024-09-27 13:24:59.877194] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.093 [2024-09-27 13:24:59.877199] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.093 [2024-09-27 13:24:59.877203] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x54d750) 00:19:58.093 [2024-09-27 13:24:59.877211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.093 [2024-09-27 13:24:59.877229] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1840, cid 0, qid 0 00:19:58.093 [2024-09-27 13:24:59.877277] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.093 [2024-09-27 13:24:59.877285] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.093 [2024-09-27 13:24:59.877289] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.093 [2024-09-27 13:24:59.877294] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1840) on tqpair=0x54d750 00:19:58.093 [2024-09-27 13:24:59.877299] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:58.093 [2024-09-27 13:24:59.877305] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:58.093 [2024-09-27 13:24:59.877313] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:58.093 [2024-09-27 13:24:59.877419] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:58.093 [2024-09-27 13:24:59.877425] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:58.093 [2024-09-27 13:24:59.877435] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.093 [2024-09-27 13:24:59.877440] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.093 [2024-09-27 
13:24:59.877444] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x54d750) 00:19:58.093 [2024-09-27 13:24:59.877452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.093 [2024-09-27 13:24:59.877471] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1840, cid 0, qid 0 00:19:58.093 [2024-09-27 13:24:59.877525] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.093 [2024-09-27 13:24:59.877533] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.093 [2024-09-27 13:24:59.877537] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.093 [2024-09-27 13:24:59.877542] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1840) on tqpair=0x54d750 00:19:58.093 [2024-09-27 13:24:59.877547] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:58.093 [2024-09-27 13:24:59.877558] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.877563] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.877567] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x54d750) 00:19:58.094 [2024-09-27 13:24:59.877575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.094 [2024-09-27 13:24:59.877593] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1840, cid 0, qid 0 00:19:58.094 [2024-09-27 13:24:59.877641] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.094 [2024-09-27 13:24:59.877649] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.094 [2024-09-27 13:24:59.877653] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.877657] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1840) on tqpair=0x54d750 00:19:58.094 [2024-09-27 13:24:59.877663] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:58.094 [2024-09-27 13:24:59.877668] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:58.094 [2024-09-27 13:24:59.877677] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:58.094 [2024-09-27 13:24:59.877718] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:58.094 [2024-09-27 13:24:59.877731] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.877736] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x54d750) 00:19:58.094 [2024-09-27 13:24:59.877744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.094 [2024-09-27 13:24:59.877767] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1840, cid 0, qid 0 00:19:58.094 [2024-09-27 
13:24:59.877857] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.094 [2024-09-27 13:24:59.877865] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.094 [2024-09-27 13:24:59.877870] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.877874] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x54d750): datao=0, datal=4096, cccid=0 00:19:58.094 [2024-09-27 13:24:59.877879] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5b1840) on tqpair(0x54d750): expected_datao=0, payload_size=4096 00:19:58.094 [2024-09-27 13:24:59.877885] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.877893] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.877899] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.877908] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.094 [2024-09-27 13:24:59.877915] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.094 [2024-09-27 13:24:59.877919] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.877923] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1840) on tqpair=0x54d750 00:19:58.094 [2024-09-27 13:24:59.877933] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:58.094 [2024-09-27 13:24:59.877939] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:58.094 [2024-09-27 13:24:59.877944] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:58.094 [2024-09-27 13:24:59.877950] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:58.094 [2024-09-27 13:24:59.877956] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:58.094 [2024-09-27 13:24:59.877962] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:58.094 [2024-09-27 13:24:59.877971] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:58.094 [2024-09-27 13:24:59.877985] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.877990] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.877994] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x54d750) 00:19:58.094 [2024-09-27 13:24:59.878003] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.094 [2024-09-27 13:24:59.878024] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1840, cid 0, qid 0 00:19:58.094 [2024-09-27 13:24:59.878091] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.094 [2024-09-27 13:24:59.878098] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.094 [2024-09-27 13:24:59.878103] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:19:58.094 [2024-09-27 13:24:59.878107] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1840) on tqpair=0x54d750 00:19:58.094 [2024-09-27 13:24:59.878116] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.878121] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.878125] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x54d750) 00:19:58.094 [2024-09-27 13:24:59.878132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.094 [2024-09-27 13:24:59.878139] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.878143] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.878147] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x54d750) 00:19:58.094 [2024-09-27 13:24:59.878154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.094 [2024-09-27 13:24:59.878161] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.878165] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.878169] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x54d750) 00:19:58.094 [2024-09-27 13:24:59.878175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.094 [2024-09-27 13:24:59.878182] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.878186] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.878190] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x54d750) 00:19:58.094 [2024-09-27 13:24:59.878196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.094 [2024-09-27 13:24:59.878202] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:19:58.094 [2024-09-27 13:24:59.878216] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:58.094 [2024-09-27 13:24:59.878224] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.094 [2024-09-27 13:24:59.878229] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x54d750) 00:19:58.094 [2024-09-27 13:24:59.878236] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.094 [2024-09-27 13:24:59.878258] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1840, cid 0, qid 0 00:19:58.094 [2024-09-27 13:24:59.878266] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b19c0, cid 1, qid 0 00:19:58.094 [2024-09-27 13:24:59.878271] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1b40, cid 2, qid 0 00:19:58.122 [2024-09-27 13:24:59.878276] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x5b1cc0, cid 3, qid 0 00:19:58.123 [2024-09-27 13:24:59.878281] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1e40, cid 4, qid 0 00:19:58.123 [2024-09-27 13:24:59.878367] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.123 [2024-09-27 13:24:59.878375] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.123 [2024-09-27 13:24:59.878379] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.878384] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1e40) on tqpair=0x54d750 00:19:58.123 [2024-09-27 13:24:59.878390] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:58.123 [2024-09-27 13:24:59.878396] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:58.123 [2024-09-27 13:24:59.878408] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.878414] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x54d750) 00:19:58.123 [2024-09-27 13:24:59.878421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.123 [2024-09-27 13:24:59.878440] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1e40, cid 4, qid 0 00:19:58.123 [2024-09-27 13:24:59.878502] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.123 [2024-09-27 13:24:59.878509] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.123 [2024-09-27 13:24:59.878513] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.878518] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x54d750): datao=0, datal=4096, cccid=4 00:19:58.123 [2024-09-27 13:24:59.878523] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5b1e40) on tqpair(0x54d750): expected_datao=0, payload_size=4096 00:19:58.123 [2024-09-27 13:24:59.878528] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.878536] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.878540] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.878549] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.123 [2024-09-27 13:24:59.878556] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.123 [2024-09-27 13:24:59.878560] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.878565] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1e40) on tqpair=0x54d750 00:19:58.123 [2024-09-27 13:24:59.878578] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:58.123 [2024-09-27 13:24:59.878609] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.878615] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x54d750) 00:19:58.123 [2024-09-27 13:24:59.878624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.123 [2024-09-27 13:24:59.878632] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.878636] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.878640] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x54d750) 00:19:58.123 [2024-09-27 13:24:59.878647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.123 [2024-09-27 13:24:59.878672] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1e40, cid 4, qid 0 00:19:58.123 [2024-09-27 13:24:59.878703] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1fc0, cid 5, qid 0 00:19:58.123 [2024-09-27 13:24:59.878800] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.123 [2024-09-27 13:24:59.878809] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.123 [2024-09-27 13:24:59.878813] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.878817] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x54d750): datao=0, datal=1024, cccid=4 00:19:58.123 [2024-09-27 13:24:59.878822] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5b1e40) on tqpair(0x54d750): expected_datao=0, payload_size=1024 00:19:58.123 [2024-09-27 13:24:59.878827] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.878835] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.878839] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.878846] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.123 [2024-09-27 13:24:59.878852] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.123 [2024-09-27 13:24:59.878856] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.878861] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1fc0) on tqpair=0x54d750 00:19:58.123 [2024-09-27 13:24:59.878882] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.123 [2024-09-27 13:24:59.878891] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.123 [2024-09-27 13:24:59.878895] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.878899] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1e40) on tqpair=0x54d750 00:19:58.123 [2024-09-27 13:24:59.878912] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.878918] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x54d750) 00:19:58.123 [2024-09-27 13:24:59.878926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.123 [2024-09-27 13:24:59.878953] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1e40, cid 4, qid 0 00:19:58.123 [2024-09-27 13:24:59.879035] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.123 [2024-09-27 13:24:59.879044] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.123 [2024-09-27 13:24:59.879049] 
nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.879053] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x54d750): datao=0, datal=3072, cccid=4 00:19:58.123 [2024-09-27 13:24:59.879058] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5b1e40) on tqpair(0x54d750): expected_datao=0, payload_size=3072 00:19:58.123 [2024-09-27 13:24:59.879063] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.879071] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.879075] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.879085] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.123 [2024-09-27 13:24:59.879092] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.123 [2024-09-27 13:24:59.879096] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.879100] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1e40) on tqpair=0x54d750 00:19:58.123 [2024-09-27 13:24:59.879112] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.879117] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x54d750) 00:19:58.123 [2024-09-27 13:24:59.879125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.123 [2024-09-27 13:24:59.879152] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1e40, cid 4, qid 0 00:19:58.123 [2024-09-27 13:24:59.879218] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.123 [2024-09-27 13:24:59.879226] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.123 [2024-09-27 13:24:59.879230] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.879234] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x54d750): datao=0, datal=8, cccid=4 00:19:58.123 [2024-09-27 13:24:59.879239] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5b1e40) on tqpair(0x54d750): expected_datao=0, payload_size=8 00:19:58.123 [2024-09-27 13:24:59.879244] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.879252] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.879256] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.879272] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.123 [2024-09-27 13:24:59.879280] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.123 [2024-09-27 13:24:59.879285] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.123 [2024-09-27 13:24:59.879289] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1e40) on tqpair=0x54d750 00:19:58.123 ===================================================== 00:19:58.123 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:58.123 ===================================================== 00:19:58.123 Controller Capabilities/Features 00:19:58.123 ================================ 00:19:58.123 Vendor ID: 0000 00:19:58.123 Subsystem Vendor ID: 
0000
00:19:58.123 Serial Number: ....................
00:19:58.123 Model Number: ........................................
00:19:58.123 Firmware Version: 25.01
00:19:58.123 Recommended Arb Burst: 0
00:19:58.123 IEEE OUI Identifier: 00 00 00
00:19:58.123 Multi-path I/O
00:19:58.123 May have multiple subsystem ports: No
00:19:58.123 May have multiple controllers: No
00:19:58.123 Associated with SR-IOV VF: No
00:19:58.123 Max Data Transfer Size: 131072
00:19:58.123 Max Number of Namespaces: 0
00:19:58.123 Max Number of I/O Queues: 1024
00:19:58.123 NVMe Specification Version (VS): 1.3
00:19:58.123 NVMe Specification Version (Identify): 1.3
00:19:58.123 Maximum Queue Entries: 128
00:19:58.123 Contiguous Queues Required: Yes
00:19:58.123 Arbitration Mechanisms Supported
00:19:58.123 Weighted Round Robin: Not Supported
00:19:58.123 Vendor Specific: Not Supported
00:19:58.123 Reset Timeout: 15000 ms
00:19:58.123 Doorbell Stride: 4 bytes
00:19:58.123 NVM Subsystem Reset: Not Supported
00:19:58.123 Command Sets Supported
00:19:58.123 NVM Command Set: Supported
00:19:58.123 Boot Partition: Not Supported
00:19:58.123 Memory Page Size Minimum: 4096 bytes
00:19:58.123 Memory Page Size Maximum: 4096 bytes
00:19:58.123 Persistent Memory Region: Not Supported
00:19:58.123 Optional Asynchronous Events Supported
00:19:58.123 Namespace Attribute Notices: Not Supported
00:19:58.123 Firmware Activation Notices: Not Supported
00:19:58.124 ANA Change Notices: Not Supported
00:19:58.124 PLE Aggregate Log Change Notices: Not Supported
00:19:58.124 LBA Status Info Alert Notices: Not Supported
00:19:58.124 EGE Aggregate Log Change Notices: Not Supported
00:19:58.124 Normal NVM Subsystem Shutdown event: Not Supported
00:19:58.124 Zone Descriptor Change Notices: Not Supported
00:19:58.124 Discovery Log Change Notices: Supported
00:19:58.124 Controller Attributes
00:19:58.124 128-bit Host Identifier: Not Supported
00:19:58.124 Non-Operational Permissive Mode: Not Supported
00:19:58.124 NVM Sets: Not Supported
00:19:58.124 Read Recovery Levels: Not Supported
00:19:58.124 Endurance Groups: Not Supported
00:19:58.124 Predictable Latency Mode: Not Supported
00:19:58.124 Traffic Based Keep ALive: Not Supported
00:19:58.124 Namespace Granularity: Not Supported
00:19:58.124 SQ Associations: Not Supported
00:19:58.124 UUID List: Not Supported
00:19:58.124 Multi-Domain Subsystem: Not Supported
00:19:58.124 Fixed Capacity Management: Not Supported
00:19:58.124 Variable Capacity Management: Not Supported
00:19:58.124 Delete Endurance Group: Not Supported
00:19:58.124 Delete NVM Set: Not Supported
00:19:58.124 Extended LBA Formats Supported: Not Supported
00:19:58.124 Flexible Data Placement Supported: Not Supported
00:19:58.124
00:19:58.124 Controller Memory Buffer Support
00:19:58.124 ================================
00:19:58.124 Supported: No
00:19:58.124
00:19:58.124 Persistent Memory Region Support
00:19:58.124 ================================
00:19:58.124 Supported: No
00:19:58.124
00:19:58.124 Admin Command Set Attributes
00:19:58.124 ============================
00:19:58.124 Security Send/Receive: Not Supported
00:19:58.124 Format NVM: Not Supported
00:19:58.124 Firmware Activate/Download: Not Supported
00:19:58.124 Namespace Management: Not Supported
00:19:58.124 Device Self-Test: Not Supported
00:19:58.124 Directives: Not Supported
00:19:58.124 NVMe-MI: Not Supported
00:19:58.124 Virtualization Management: Not Supported
00:19:58.124 Doorbell Buffer Config: Not Supported
00:19:58.124 Get LBA Status Capability: Not Supported
00:19:58.124 Command & Feature Lockdown Capability: Not Supported
00:19:58.124 Abort Command Limit: 1
00:19:58.124 Async Event Request Limit: 4
00:19:58.124 Number of Firmware Slots: N/A
00:19:58.124 Firmware Slot 1 Read-Only: N/A
00:19:58.124 Firmware Activation Without Reset: N/A
00:19:58.124 Multiple Update Detection Support: N/A
00:19:58.124 Firmware Update Granularity: No Information Provided
00:19:58.124 Per-Namespace SMART Log: No
00:19:58.124 Asymmetric Namespace Access Log Page: Not Supported
00:19:58.124 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:19:58.124 Command Effects Log Page: Not Supported
00:19:58.124 Get Log Page Extended Data: Supported
00:19:58.124 Telemetry Log Pages: Not Supported
00:19:58.124 Persistent Event Log Pages: Not Supported
00:19:58.124 Supported Log Pages Log Page: May Support
00:19:58.124 Commands Supported & Effects Log Page: Not Supported
00:19:58.124 Feature Identifiers & Effects Log Page:May Support
00:19:58.124 NVMe-MI Commands & Effects Log Page: May Support
00:19:58.124 Data Area 4 for Telemetry Log: Not Supported
00:19:58.124 Error Log Page Entries Supported: 128
00:19:58.124 Keep Alive: Not Supported
00:19:58.124
00:19:58.124 NVM Command Set Attributes
00:19:58.124 ==========================
00:19:58.124 Submission Queue Entry Size
00:19:58.124 Max: 1
00:19:58.124 Min: 1
00:19:58.124 Completion Queue Entry Size
00:19:58.124 Max: 1
00:19:58.124 Min: 1
00:19:58.124 Number of Namespaces: 0
00:19:58.124 Compare Command: Not Supported
00:19:58.124 Write Uncorrectable Command: Not Supported
00:19:58.124 Dataset Management Command: Not Supported
00:19:58.124 Write Zeroes Command: Not Supported
00:19:58.124 Set Features Save Field: Not Supported
00:19:58.124 Reservations: Not Supported
00:19:58.124 Timestamp: Not Supported
00:19:58.124 Copy: Not Supported
00:19:58.124 Volatile Write Cache: Not Present
00:19:58.124 Atomic Write Unit (Normal): 1
00:19:58.124 Atomic Write Unit (PFail): 1
00:19:58.124 Atomic Compare & Write Unit: 1
00:19:58.124 Fused Compare & Write: Supported
00:19:58.124 Scatter-Gather List
00:19:58.124 SGL Command Set: Supported
00:19:58.124 SGL Keyed: Supported
00:19:58.124 SGL Bit Bucket Descriptor: Not Supported
00:19:58.124 SGL Metadata Pointer: Not Supported
00:19:58.124 Oversized SGL: Not Supported
00:19:58.124 SGL Metadata Address: Not Supported
00:19:58.124 SGL Offset: Supported
00:19:58.124 Transport SGL Data Block: Not Supported
00:19:58.124 Replay Protected Memory Block: Not Supported
00:19:58.124
00:19:58.124 Firmware Slot Information
00:19:58.124 =========================
00:19:58.124 Active slot: 0
00:19:58.124
00:19:58.124
00:19:58.124 Error Log
00:19:58.124 =========
00:19:58.124
00:19:58.124 Active Namespaces
00:19:58.124 =================
00:19:58.124 Discovery Log Page
00:19:58.124 ==================
00:19:58.124 Generation Counter: 2
00:19:58.124 Number of Records: 2
00:19:58.124 Record Format: 0
00:19:58.124
00:19:58.124 Discovery Log Entry 0
00:19:58.124 ----------------------
00:19:58.124 Transport Type: 3 (TCP)
00:19:58.124 Address Family: 1 (IPv4)
00:19:58.124 Subsystem Type: 3 (Current Discovery Subsystem)
00:19:58.124 Entry Flags:
00:19:58.124 Duplicate Returned Information: 1
00:19:58.124 Explicit Persistent Connection Support for Discovery: 1
00:19:58.124 Transport Requirements:
00:19:58.124 Secure Channel: Not Required
00:19:58.124 Port ID: 0 (0x0000)
00:19:58.124 Controller ID: 65535 (0xffff)
00:19:58.124 Admin Max SQ Size: 128
00:19:58.124 Transport Service Identifier: 4420
00:19:58.124 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:58.124 Transport Address: 10.0.0.2 00:19:58.124 Discovery Log Entry 1 00:19:58.124 ---------------------- 00:19:58.124 Transport Type: 3 (TCP) 00:19:58.124 Address Family: 1 (IPv4) 00:19:58.124 Subsystem Type: 2 (NVM Subsystem) 00:19:58.124 Entry Flags: 00:19:58.124 Duplicate Returned Information: 0 00:19:58.124 Explicit Persistent Connection Support for Discovery: 0 00:19:58.124 Transport Requirements: 00:19:58.124 Secure Channel: Not Required 00:19:58.124 Port ID: 0 (0x0000) 00:19:58.124 Controller ID: 65535 (0xffff) 00:19:58.124 Admin Max SQ Size: 128 00:19:58.124 Transport Service Identifier: 4420 00:19:58.124 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:58.124 Transport Address: 10.0.0.2 [2024-09-27 13:24:59.879410] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:19:58.124 [2024-09-27 13:24:59.879430] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1840) on tqpair=0x54d750 00:19:58.124 [2024-09-27 13:24:59.879438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.124 [2024-09-27 13:24:59.879445] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b19c0) on tqpair=0x54d750 00:19:58.124 [2024-09-27 13:24:59.879450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.124 [2024-09-27 13:24:59.879456] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1b40) on tqpair=0x54d750 00:19:58.124 [2024-09-27 13:24:59.879461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.124 [2024-09-27 13:24:59.879467] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1cc0) on tqpair=0x54d750 00:19:58.124 [2024-09-27 13:24:59.879472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.124 [2024-09-27 13:24:59.879482] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.124 [2024-09-27 13:24:59.879487] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.124 [2024-09-27 13:24:59.879491] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x54d750) 00:19:58.124 [2024-09-27 13:24:59.879500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.124 [2024-09-27 13:24:59.879527] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1cc0, cid 3, qid 0 00:19:58.124 [2024-09-27 13:24:59.879590] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.124 [2024-09-27 13:24:59.879598] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.124 [2024-09-27 13:24:59.879603] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.124 [2024-09-27 13:24:59.879607] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1cc0) on tqpair=0x54d750 00:19:58.124 [2024-09-27 13:24:59.879616] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.124 [2024-09-27 13:24:59.879621] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.124 [2024-09-27 13:24:59.879625] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x54d750) 00:19:58.124 [2024-09-27 13:24:59.879632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.124 [2024-09-27 13:24:59.879654] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1cc0, cid 3, qid 0 00:19:58.124 [2024-09-27 13:24:59.879748] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.125 [2024-09-27 13:24:59.879759] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.125 [2024-09-27 13:24:59.879763] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.879767] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1cc0) on tqpair=0x54d750 00:19:58.125 [2024-09-27 13:24:59.879773] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:58.125 [2024-09-27 13:24:59.879778] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:58.125 [2024-09-27 13:24:59.879789] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.879794] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.879799] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x54d750) 00:19:58.125 [2024-09-27 13:24:59.879807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.125 [2024-09-27 13:24:59.879828] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1cc0, cid 3, qid 0 00:19:58.125 [2024-09-27 13:24:59.879875] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.125 [2024-09-27 13:24:59.879883] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.125 [2024-09-27 13:24:59.879887] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.879892] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1cc0) on tqpair=0x54d750 00:19:58.125 [2024-09-27 13:24:59.879904] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.879909] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.879913] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x54d750) 00:19:58.125 [2024-09-27 13:24:59.879921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.125 [2024-09-27 13:24:59.879938] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1cc0, cid 3, qid 0 00:19:58.125 [2024-09-27 13:24:59.879982] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.125 [2024-09-27 13:24:59.879989] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.125 [2024-09-27 13:24:59.879994] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.879998] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1cc0) on tqpair=0x54d750 00:19:58.125 [2024-09-27 13:24:59.880009] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.880014] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.880018] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x54d750) 00:19:58.125 [2024-09-27 13:24:59.880026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.125 [2024-09-27 13:24:59.880044] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1cc0, cid 3, qid 0 00:19:58.125 [2024-09-27 13:24:59.880087] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.125 [2024-09-27 13:24:59.880095] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.125 [2024-09-27 13:24:59.880099] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.880104] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1cc0) on tqpair=0x54d750 00:19:58.125 [2024-09-27 13:24:59.880115] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.880120] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.880124] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x54d750) 00:19:58.125 [2024-09-27 13:24:59.880131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.125 [2024-09-27 13:24:59.880149] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1cc0, cid 3, qid 0 00:19:58.125 [2024-09-27 13:24:59.880197] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.125 [2024-09-27 13:24:59.880205] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.125 [2024-09-27 13:24:59.880209] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.880213] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1cc0) on tqpair=0x54d750 00:19:58.125 [2024-09-27 13:24:59.880224] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.880229] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.880233] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x54d750) 00:19:58.125 [2024-09-27 13:24:59.880241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.125 [2024-09-27 13:24:59.880258] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1cc0, cid 3, qid 0 00:19:58.125 [2024-09-27 13:24:59.880302] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.125 [2024-09-27 13:24:59.880310] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.125 [2024-09-27 13:24:59.880314] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.880319] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1cc0) on tqpair=0x54d750 00:19:58.125 [2024-09-27 13:24:59.880330] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.880335] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.880339] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x54d750) 00:19:58.125 [2024-09-27 
13:24:59.880346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.125 [2024-09-27 13:24:59.880364] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1cc0, cid 3, qid 0 00:19:58.125 [2024-09-27 13:24:59.880412] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.125 [2024-09-27 13:24:59.880420] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.125 [2024-09-27 13:24:59.880424] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.880428] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1cc0) on tqpair=0x54d750 00:19:58.125 [2024-09-27 13:24:59.880439] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.880444] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.880448] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x54d750) 00:19:58.125 [2024-09-27 13:24:59.880456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.125 [2024-09-27 13:24:59.880474] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1cc0, cid 3, qid 0 00:19:58.125 [2024-09-27 13:24:59.880553] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.125 [2024-09-27 13:24:59.880561] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.125 [2024-09-27 13:24:59.880566] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.880570] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1cc0) on tqpair=0x54d750 00:19:58.125 [2024-09-27 13:24:59.880581] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.880587] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.880591] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x54d750) 00:19:58.125 [2024-09-27 13:24:59.880599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.125 [2024-09-27 13:24:59.880617] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1cc0, cid 3, qid 0 00:19:58.125 [2024-09-27 13:24:59.883741] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.125 [2024-09-27 13:24:59.883780] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.125 [2024-09-27 13:24:59.883787] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.883792] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1cc0) on tqpair=0x54d750 00:19:58.125 [2024-09-27 13:24:59.883809] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.883815] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.883820] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x54d750) 00:19:58.125 [2024-09-27 13:24:59.883829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.125 [2024-09-27 13:24:59.883859] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5b1cc0, cid 3, qid 0 00:19:58.125 [2024-09-27 13:24:59.883957] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.125 [2024-09-27 13:24:59.883965] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.125 [2024-09-27 13:24:59.883969] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.125 [2024-09-27 13:24:59.883974] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5b1cc0) on tqpair=0x54d750 00:19:58.125 [2024-09-27 13:24:59.883983] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:19:58.125 00:19:58.125 13:24:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:58.125 [2024-09-27 13:24:59.928306] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:19:58.126 [2024-09-27 13:24:59.928361] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74008 ] 00:19:58.389 [2024-09-27 13:25:00.070244] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:19:58.389 [2024-09-27 13:25:00.070328] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:58.389 [2024-09-27 13:25:00.070337] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:58.389 [2024-09-27 13:25:00.070351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:58.389 [2024-09-27 13:25:00.070361] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:58.389 [2024-09-27 13:25:00.070707] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:19:58.389 [2024-09-27 13:25:00.070775] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1865750 0 00:19:58.389 [2024-09-27 13:25:00.077707] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:58.389 [2024-09-27 13:25:00.077734] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:58.389 [2024-09-27 13:25:00.077741] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:58.389 [2024-09-27 13:25:00.077745] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:58.389 [2024-09-27 13:25:00.077787] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.389 [2024-09-27 13:25:00.077796] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.389 [2024-09-27 13:25:00.077801] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1865750) 00:19:58.389 [2024-09-27 13:25:00.077816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:58.389 [2024-09-27 13:25:00.077851] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9840, cid 0, qid 0 00:19:58.389 [2024-09-27 13:25:00.085699] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:19:58.389 [2024-09-27 13:25:00.085724] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.389 [2024-09-27 13:25:00.085730] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.389 [2024-09-27 13:25:00.085736] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9840) on tqpair=0x1865750 00:19:58.389 [2024-09-27 13:25:00.085752] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:58.389 [2024-09-27 13:25:00.085762] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:19:58.389 [2024-09-27 13:25:00.085769] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:19:58.389 [2024-09-27 13:25:00.085786] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.389 [2024-09-27 13:25:00.085792] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.389 [2024-09-27 13:25:00.085796] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1865750) 00:19:58.389 [2024-09-27 13:25:00.085807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.389 [2024-09-27 13:25:00.085836] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9840, cid 0, qid 0 00:19:58.389 [2024-09-27 13:25:00.085905] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.389 [2024-09-27 13:25:00.085913] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.389 [2024-09-27 13:25:00.085917] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.389 [2024-09-27 13:25:00.085922] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9840) on tqpair=0x1865750 00:19:58.389 [2024-09-27 13:25:00.085928] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:19:58.389 [2024-09-27 13:25:00.085948] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:19:58.389 [2024-09-27 13:25:00.085956] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.389 [2024-09-27 13:25:00.085961] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.389 [2024-09-27 13:25:00.085965] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1865750) 00:19:58.389 [2024-09-27 13:25:00.085973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.389 [2024-09-27 13:25:00.085994] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9840, cid 0, qid 0 00:19:58.389 [2024-09-27 13:25:00.086050] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.389 [2024-09-27 13:25:00.086058] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.389 [2024-09-27 13:25:00.086062] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.389 [2024-09-27 13:25:00.086066] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9840) on tqpair=0x1865750 00:19:58.389 [2024-09-27 13:25:00.086072] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:19:58.389 
[2024-09-27 13:25:00.086082] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:19:58.390 [2024-09-27 13:25:00.086090] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.086094] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.086098] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1865750) 00:19:58.390 [2024-09-27 13:25:00.086106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.390 [2024-09-27 13:25:00.086125] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9840, cid 0, qid 0 00:19:58.390 [2024-09-27 13:25:00.086182] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.390 [2024-09-27 13:25:00.086190] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.390 [2024-09-27 13:25:00.086194] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.086198] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9840) on tqpair=0x1865750 00:19:58.390 [2024-09-27 13:25:00.086204] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:58.390 [2024-09-27 13:25:00.086215] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.086220] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.086224] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1865750) 00:19:58.390 [2024-09-27 13:25:00.086232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.390 [2024-09-27 13:25:00.086250] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9840, cid 0, qid 0 00:19:58.390 [2024-09-27 13:25:00.086304] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.390 [2024-09-27 13:25:00.086312] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.390 [2024-09-27 13:25:00.086316] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.086320] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9840) on tqpair=0x1865750 00:19:58.390 [2024-09-27 13:25:00.086325] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:19:58.390 [2024-09-27 13:25:00.086331] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:19:58.390 [2024-09-27 13:25:00.086340] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:58.390 [2024-09-27 13:25:00.086446] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:19:58.390 [2024-09-27 13:25:00.086451] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:58.390 [2024-09-27 13:25:00.086461] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.086466] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.086470] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1865750) 00:19:58.390 [2024-09-27 13:25:00.086477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.390 [2024-09-27 13:25:00.086497] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9840, cid 0, qid 0 00:19:58.390 [2024-09-27 13:25:00.086554] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.390 [2024-09-27 13:25:00.086562] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.390 [2024-09-27 13:25:00.086566] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.086570] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9840) on tqpair=0x1865750 00:19:58.390 [2024-09-27 13:25:00.086576] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:58.390 [2024-09-27 13:25:00.086587] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.086592] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.086596] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1865750) 00:19:58.390 [2024-09-27 13:25:00.086604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.390 [2024-09-27 13:25:00.086622] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9840, cid 0, qid 0 00:19:58.390 [2024-09-27 13:25:00.086695] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.390 [2024-09-27 13:25:00.086708] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.390 [2024-09-27 13:25:00.086713] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.086718] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9840) on tqpair=0x1865750 00:19:58.390 [2024-09-27 13:25:00.086723] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:58.390 [2024-09-27 13:25:00.086730] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:19:58.390 [2024-09-27 13:25:00.086739] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:19:58.390 [2024-09-27 13:25:00.086756] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:19:58.390 [2024-09-27 13:25:00.086768] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.086773] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1865750) 00:19:58.390 [2024-09-27 13:25:00.086781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.390 
[2024-09-27 13:25:00.086805] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9840, cid 0, qid 0 00:19:58.390 [2024-09-27 13:25:00.086921] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.390 [2024-09-27 13:25:00.086929] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.390 [2024-09-27 13:25:00.086933] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.086937] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1865750): datao=0, datal=4096, cccid=0 00:19:58.390 [2024-09-27 13:25:00.086943] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18c9840) on tqpair(0x1865750): expected_datao=0, payload_size=4096 00:19:58.390 [2024-09-27 13:25:00.086948] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.086957] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.086961] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.086970] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.390 [2024-09-27 13:25:00.086977] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.390 [2024-09-27 13:25:00.086981] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.086985] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9840) on tqpair=0x1865750 00:19:58.390 [2024-09-27 13:25:00.086995] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:19:58.390 [2024-09-27 13:25:00.087000] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:19:58.390 [2024-09-27 13:25:00.087005] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:19:58.390 [2024-09-27 13:25:00.087010] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:19:58.390 [2024-09-27 13:25:00.087016] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:19:58.390 [2024-09-27 13:25:00.087021] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:19:58.390 [2024-09-27 13:25:00.087043] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:19:58.390 [2024-09-27 13:25:00.087057] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.087062] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.390 [2024-09-27 13:25:00.087066] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1865750) 00:19:58.390 [2024-09-27 13:25:00.087075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.390 [2024-09-27 13:25:00.087097] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9840, cid 0, qid 0 00:19:58.390 [2024-09-27 13:25:00.087161] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.391 [2024-09-27 13:25:00.087169] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.391 
[2024-09-27 13:25:00.087173] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087177] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9840) on tqpair=0x1865750 00:19:58.391 [2024-09-27 13:25:00.087186] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087191] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087195] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1865750) 00:19:58.391 [2024-09-27 13:25:00.087202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.391 [2024-09-27 13:25:00.087209] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087213] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087217] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1865750) 00:19:58.391 [2024-09-27 13:25:00.087224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.391 [2024-09-27 13:25:00.087230] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087235] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087239] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1865750) 00:19:58.391 [2024-09-27 13:25:00.087245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.391 [2024-09-27 13:25:00.087252] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087256] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087260] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.391 [2024-09-27 13:25:00.087266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.391 [2024-09-27 13:25:00.087272] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:58.391 [2024-09-27 13:25:00.087285] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:58.391 [2024-09-27 13:25:00.087294] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087298] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1865750) 00:19:58.391 [2024-09-27 13:25:00.087306] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.391 [2024-09-27 13:25:00.087327] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9840, cid 0, qid 0 00:19:58.391 [2024-09-27 13:25:00.087335] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c99c0, cid 1, qid 0 00:19:58.391 [2024-09-27 13:25:00.087341] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9b40, cid 2, qid 0 00:19:58.391 
[2024-09-27 13:25:00.087346] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.391 [2024-09-27 13:25:00.087351] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9e40, cid 4, qid 0 00:19:58.391 [2024-09-27 13:25:00.087465] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.391 [2024-09-27 13:25:00.087483] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.391 [2024-09-27 13:25:00.087488] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087493] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9e40) on tqpair=0x1865750 00:19:58.391 [2024-09-27 13:25:00.087499] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:19:58.391 [2024-09-27 13:25:00.087505] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:58.391 [2024-09-27 13:25:00.087520] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:19:58.391 [2024-09-27 13:25:00.087528] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:58.391 [2024-09-27 13:25:00.087536] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087540] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087544] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1865750) 00:19:58.391 [2024-09-27 13:25:00.087552] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.391 [2024-09-27 13:25:00.087574] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9e40, cid 4, qid 0 00:19:58.391 [2024-09-27 13:25:00.087638] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.391 [2024-09-27 13:25:00.087645] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.391 [2024-09-27 13:25:00.087649] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087654] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9e40) on tqpair=0x1865750 00:19:58.391 [2024-09-27 13:25:00.087738] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:19:58.391 [2024-09-27 13:25:00.087754] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:58.391 [2024-09-27 13:25:00.087763] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087767] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1865750) 00:19:58.391 [2024-09-27 13:25:00.087775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.391 [2024-09-27 13:25:00.087799] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9e40, cid 4, qid 0 00:19:58.391 [2024-09-27 
13:25:00.087875] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.391 [2024-09-27 13:25:00.087883] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.391 [2024-09-27 13:25:00.087887] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087891] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1865750): datao=0, datal=4096, cccid=4 00:19:58.391 [2024-09-27 13:25:00.087896] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18c9e40) on tqpair(0x1865750): expected_datao=0, payload_size=4096 00:19:58.391 [2024-09-27 13:25:00.087902] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087910] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087914] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087923] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.391 [2024-09-27 13:25:00.087930] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.391 [2024-09-27 13:25:00.087933] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087938] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9e40) on tqpair=0x1865750 00:19:58.391 [2024-09-27 13:25:00.087957] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:19:58.391 [2024-09-27 13:25:00.087969] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:58.391 [2024-09-27 13:25:00.087980] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:58.391 [2024-09-27 13:25:00.087989] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.087994] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1865750) 00:19:58.391 [2024-09-27 13:25:00.088002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.391 [2024-09-27 13:25:00.088023] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9e40, cid 4, qid 0 00:19:58.391 [2024-09-27 13:25:00.088106] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.391 [2024-09-27 13:25:00.088113] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.391 [2024-09-27 13:25:00.088118] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.391 [2024-09-27 13:25:00.088122] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1865750): datao=0, datal=4096, cccid=4 00:19:58.392 [2024-09-27 13:25:00.088127] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18c9e40) on tqpair(0x1865750): expected_datao=0, payload_size=4096 00:19:58.392 [2024-09-27 13:25:00.088132] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088140] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088144] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088153] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:19:58.392 [2024-09-27 13:25:00.088159] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.392 [2024-09-27 13:25:00.088163] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088167] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9e40) on tqpair=0x1865750 00:19:58.392 [2024-09-27 13:25:00.088179] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:58.392 [2024-09-27 13:25:00.088190] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:58.392 [2024-09-27 13:25:00.088199] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088204] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1865750) 00:19:58.392 [2024-09-27 13:25:00.088212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.392 [2024-09-27 13:25:00.088232] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9e40, cid 4, qid 0 00:19:58.392 [2024-09-27 13:25:00.088303] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.392 [2024-09-27 13:25:00.088310] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.392 [2024-09-27 13:25:00.088314] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088318] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1865750): datao=0, datal=4096, cccid=4 00:19:58.392 [2024-09-27 13:25:00.088323] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18c9e40) on tqpair(0x1865750): expected_datao=0, payload_size=4096 00:19:58.392 [2024-09-27 13:25:00.088328] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088335] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088340] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088349] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.392 [2024-09-27 13:25:00.088355] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.392 [2024-09-27 13:25:00.088359] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088364] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9e40) on tqpair=0x1865750 00:19:58.392 [2024-09-27 13:25:00.088377] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:58.392 [2024-09-27 13:25:00.088388] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:19:58.392 [2024-09-27 13:25:00.088400] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:58.392 [2024-09-27 13:25:00.088407] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:58.392 [2024-09-27 13:25:00.088412] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:58.392 [2024-09-27 13:25:00.088418] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:58.392 [2024-09-27 13:25:00.088424] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:19:58.392 [2024-09-27 13:25:00.088429] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:58.392 [2024-09-27 13:25:00.088435] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:19:58.392 [2024-09-27 13:25:00.088452] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088457] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1865750) 00:19:58.392 [2024-09-27 13:25:00.088465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.392 [2024-09-27 13:25:00.088473] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088477] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088481] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1865750) 00:19:58.392 [2024-09-27 13:25:00.088488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.392 [2024-09-27 13:25:00.088513] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9e40, cid 4, qid 0 00:19:58.392 [2024-09-27 13:25:00.088521] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9fc0, cid 5, qid 0 00:19:58.392 [2024-09-27 13:25:00.088600] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.392 [2024-09-27 13:25:00.088607] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.392 [2024-09-27 13:25:00.088611] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088616] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9e40) on tqpair=0x1865750 00:19:58.392 [2024-09-27 13:25:00.088623] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.392 [2024-09-27 13:25:00.088629] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.392 [2024-09-27 13:25:00.088633] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088638] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9fc0) on tqpair=0x1865750 00:19:58.392 [2024-09-27 13:25:00.088649] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088654] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1865750) 00:19:58.392 [2024-09-27 13:25:00.088661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.392 [2024-09-27 13:25:00.088693] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9fc0, cid 5, qid 0 00:19:58.392 [2024-09-27 
13:25:00.088777] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.392 [2024-09-27 13:25:00.088784] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.392 [2024-09-27 13:25:00.088788] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088793] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9fc0) on tqpair=0x1865750 00:19:58.392 [2024-09-27 13:25:00.088804] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088809] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1865750) 00:19:58.392 [2024-09-27 13:25:00.088817] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.392 [2024-09-27 13:25:00.088837] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9fc0, cid 5, qid 0 00:19:58.392 [2024-09-27 13:25:00.088917] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.392 [2024-09-27 13:25:00.088925] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.392 [2024-09-27 13:25:00.088929] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088933] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9fc0) on tqpair=0x1865750 00:19:58.392 [2024-09-27 13:25:00.088944] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.088949] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1865750) 00:19:58.392 [2024-09-27 13:25:00.088956] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.392 [2024-09-27 13:25:00.088974] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9fc0, cid 5, qid 0 00:19:58.392 [2024-09-27 13:25:00.089031] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.392 [2024-09-27 13:25:00.089039] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.392 [2024-09-27 13:25:00.089043] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.089047] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9fc0) on tqpair=0x1865750 00:19:58.392 [2024-09-27 13:25:00.089067] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.089073] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1865750) 00:19:58.392 [2024-09-27 13:25:00.089081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.392 [2024-09-27 13:25:00.089089] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.392 [2024-09-27 13:25:00.089093] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1865750) 00:19:58.392 [2024-09-27 13:25:00.089100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.392 [2024-09-27 13:25:00.089109] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.392 [2024-09-27 
13:25:00.089113] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1865750) 00:19:58.393 [2024-09-27 13:25:00.089120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.393 [2024-09-27 13:25:00.089128] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089132] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1865750) 00:19:58.393 [2024-09-27 13:25:00.089139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.393 [2024-09-27 13:25:00.089160] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9fc0, cid 5, qid 0 00:19:58.393 [2024-09-27 13:25:00.089168] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9e40, cid 4, qid 0 00:19:58.393 [2024-09-27 13:25:00.089173] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ca140, cid 6, qid 0 00:19:58.393 [2024-09-27 13:25:00.089178] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ca2c0, cid 7, qid 0 00:19:58.393 [2024-09-27 13:25:00.089345] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.393 [2024-09-27 13:25:00.089352] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.393 [2024-09-27 13:25:00.089356] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089360] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1865750): datao=0, datal=8192, cccid=5 00:19:58.393 [2024-09-27 13:25:00.089365] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18c9fc0) on tqpair(0x1865750): expected_datao=0, payload_size=8192 00:19:58.393 [2024-09-27 13:25:00.089370] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089388] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089393] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089399] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.393 [2024-09-27 13:25:00.089405] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.393 [2024-09-27 13:25:00.089409] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089413] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1865750): datao=0, datal=512, cccid=4 00:19:58.393 [2024-09-27 13:25:00.089418] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18c9e40) on tqpair(0x1865750): expected_datao=0, payload_size=512 00:19:58.393 [2024-09-27 13:25:00.089423] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089430] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089434] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089440] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.393 [2024-09-27 13:25:00.089446] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.393 [2024-09-27 13:25:00.089450] 
nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089453] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1865750): datao=0, datal=512, cccid=6 00:19:58.393 [2024-09-27 13:25:00.089458] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ca140) on tqpair(0x1865750): expected_datao=0, payload_size=512 00:19:58.393 [2024-09-27 13:25:00.089463] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089470] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089474] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089480] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.393 [2024-09-27 13:25:00.089486] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.393 [2024-09-27 13:25:00.089490] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089494] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1865750): datao=0, datal=4096, cccid=7 00:19:58.393 [2024-09-27 13:25:00.089498] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ca2c0) on tqpair(0x1865750): expected_datao=0, payload_size=4096 00:19:58.393 [2024-09-27 13:25:00.089503] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089511] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089515] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089523] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.393 [2024-09-27 13:25:00.089530] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.393 [2024-09-27 13:25:00.089534] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089538] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9fc0) on tqpair=0x1865750 00:19:58.393 [2024-09-27 13:25:00.089554] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.393 [2024-09-27 13:25:00.089561] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.393 [2024-09-27 13:25:00.089565] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089569] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9e40) on tqpair=0x1865750 00:19:58.393 [2024-09-27 13:25:00.089582] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.393 [2024-09-27 13:25:00.089588] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.393 [2024-09-27 13:25:00.089593] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089597] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ca140) on tqpair=0x1865750 00:19:58.393 [2024-09-27 13:25:00.089605] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.393 [2024-09-27 13:25:00.089612] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.393 [2024-09-27 13:25:00.089616] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.393 [2024-09-27 13:25:00.089620] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ca2c0) on tqpair=0x1865750 00:19:58.393 
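The debug trace above records the host side of an NVMe-oF TCP controller bring-up against nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420: the driver queues its async event requests, reads and applies the keep-alive timeout, sets the number of queues, walks the identify sequence (active namespace list, namespace data, namespace ID descriptors), reads its feature set, and fetches the standard log pages before setting the controller state to ready; the identify report that follows is the result of that sequence. As a rough illustration only, a minimal sketch using SPDK's public host API (spdk_nvme_transport_id_parse, spdk_nvme_connect, spdk_nvme_ctrlr_get_data) to perform the same connect-and-identify is given below. The transport address, service ID, and subsystem NQN are taken from this log; the program name, the printed field selection, and the output layout are assumptions for the sketch and are not part of the test being run here.

/*
 * Minimal sketch (not part of this test run): connect to an NVMe-oF TCP
 * subsystem like the one logged above and print a few identify-controller
 * fields. Build against SPDK's headers and libraries; error handling is
 * reduced to the bare minimum.
 */
#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Bring up the SPDK environment (memory, PCI access, etc.). */
	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch"; /* arbitrary name, assumption for this sketch */
	if (spdk_env_init(&opts) < 0) {
		fprintf(stderr, "spdk_env_init() failed\n");
		return 1;
	}

	/* Same transport, address, and subsystem NQN as in the trace above. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/* Connecting drives the fabrics connect + identify sequence logged above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect() failed\n");
		return 1;
	}

	/* The cached identify-controller data backs reports like the one below. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Vendor ID:     %04x\n", cdata->vid);
	printf("Serial Number: %.20s\n", (const char *)cdata->sn);
	printf("Model Number:  %.40s\n", (const char *)cdata->mn);
	printf("Firmware:      %.8s\n", (const char *)cdata->fr);
	printf("Namespaces:    %u\n", cdata->nn);

	/* Detach starts the controller shutdown handshake seen later in the trace. */
	spdk_nvme_detach(ctrlr);
	return 0;
}

The final spdk_nvme_detach() call corresponds to the teardown portion of the trace further down, where outstanding admin requests complete as ABORTED - SQ DELETION and the driver, seeing RTD3E = 0, falls back to a 10000 ms shutdown timeout while it polls the controller's shutdown status over repeated fabrics property reads.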
===================================================== 00:19:58.393 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:58.393 ===================================================== 00:19:58.393 Controller Capabilities/Features 00:19:58.393 ================================ 00:19:58.393 Vendor ID: 8086 00:19:58.393 Subsystem Vendor ID: 8086 00:19:58.393 Serial Number: SPDK00000000000001 00:19:58.393 Model Number: SPDK bdev Controller 00:19:58.393 Firmware Version: 25.01 00:19:58.393 Recommended Arb Burst: 6 00:19:58.393 IEEE OUI Identifier: e4 d2 5c 00:19:58.393 Multi-path I/O 00:19:58.393 May have multiple subsystem ports: Yes 00:19:58.393 May have multiple controllers: Yes 00:19:58.393 Associated with SR-IOV VF: No 00:19:58.393 Max Data Transfer Size: 131072 00:19:58.393 Max Number of Namespaces: 32 00:19:58.393 Max Number of I/O Queues: 127 00:19:58.393 NVMe Specification Version (VS): 1.3 00:19:58.393 NVMe Specification Version (Identify): 1.3 00:19:58.393 Maximum Queue Entries: 128 00:19:58.393 Contiguous Queues Required: Yes 00:19:58.393 Arbitration Mechanisms Supported 00:19:58.393 Weighted Round Robin: Not Supported 00:19:58.393 Vendor Specific: Not Supported 00:19:58.393 Reset Timeout: 15000 ms 00:19:58.393 Doorbell Stride: 4 bytes 00:19:58.393 NVM Subsystem Reset: Not Supported 00:19:58.393 Command Sets Supported 00:19:58.393 NVM Command Set: Supported 00:19:58.393 Boot Partition: Not Supported 00:19:58.393 Memory Page Size Minimum: 4096 bytes 00:19:58.393 Memory Page Size Maximum: 4096 bytes 00:19:58.393 Persistent Memory Region: Not Supported 00:19:58.393 Optional Asynchronous Events Supported 00:19:58.393 Namespace Attribute Notices: Supported 00:19:58.393 Firmware Activation Notices: Not Supported 00:19:58.393 ANA Change Notices: Not Supported 00:19:58.393 PLE Aggregate Log Change Notices: Not Supported 00:19:58.393 LBA Status Info Alert Notices: Not Supported 00:19:58.393 EGE Aggregate Log Change Notices: Not Supported 00:19:58.393 Normal NVM Subsystem Shutdown event: Not Supported 00:19:58.393 Zone Descriptor Change Notices: Not Supported 00:19:58.393 Discovery Log Change Notices: Not Supported 00:19:58.393 Controller Attributes 00:19:58.394 128-bit Host Identifier: Supported 00:19:58.394 Non-Operational Permissive Mode: Not Supported 00:19:58.394 NVM Sets: Not Supported 00:19:58.394 Read Recovery Levels: Not Supported 00:19:58.394 Endurance Groups: Not Supported 00:19:58.394 Predictable Latency Mode: Not Supported 00:19:58.394 Traffic Based Keep ALive: Not Supported 00:19:58.394 Namespace Granularity: Not Supported 00:19:58.394 SQ Associations: Not Supported 00:19:58.394 UUID List: Not Supported 00:19:58.394 Multi-Domain Subsystem: Not Supported 00:19:58.394 Fixed Capacity Management: Not Supported 00:19:58.394 Variable Capacity Management: Not Supported 00:19:58.394 Delete Endurance Group: Not Supported 00:19:58.394 Delete NVM Set: Not Supported 00:19:58.394 Extended LBA Formats Supported: Not Supported 00:19:58.394 Flexible Data Placement Supported: Not Supported 00:19:58.394 00:19:58.394 Controller Memory Buffer Support 00:19:58.394 ================================ 00:19:58.394 Supported: No 00:19:58.394 00:19:58.394 Persistent Memory Region Support 00:19:58.394 ================================ 00:19:58.394 Supported: No 00:19:58.394 00:19:58.394 Admin Command Set Attributes 00:19:58.394 ============================ 00:19:58.394 Security Send/Receive: Not Supported 00:19:58.394 Format NVM: Not Supported 00:19:58.394 Firmware Activate/Download: 
Not Supported 00:19:58.394 Namespace Management: Not Supported 00:19:58.394 Device Self-Test: Not Supported 00:19:58.394 Directives: Not Supported 00:19:58.394 NVMe-MI: Not Supported 00:19:58.394 Virtualization Management: Not Supported 00:19:58.394 Doorbell Buffer Config: Not Supported 00:19:58.394 Get LBA Status Capability: Not Supported 00:19:58.394 Command & Feature Lockdown Capability: Not Supported 00:19:58.394 Abort Command Limit: 4 00:19:58.394 Async Event Request Limit: 4 00:19:58.394 Number of Firmware Slots: N/A 00:19:58.394 Firmware Slot 1 Read-Only: N/A 00:19:58.394 Firmware Activation Without Reset: N/A 00:19:58.394 Multiple Update Detection Support: N/A 00:19:58.394 Firmware Update Granularity: No Information Provided 00:19:58.394 Per-Namespace SMART Log: No 00:19:58.394 Asymmetric Namespace Access Log Page: Not Supported 00:19:58.394 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:58.394 Command Effects Log Page: Supported 00:19:58.394 Get Log Page Extended Data: Supported 00:19:58.394 Telemetry Log Pages: Not Supported 00:19:58.394 Persistent Event Log Pages: Not Supported 00:19:58.394 Supported Log Pages Log Page: May Support 00:19:58.394 Commands Supported & Effects Log Page: Not Supported 00:19:58.394 Feature Identifiers & Effects Log Page:May Support 00:19:58.394 NVMe-MI Commands & Effects Log Page: May Support 00:19:58.394 Data Area 4 for Telemetry Log: Not Supported 00:19:58.394 Error Log Page Entries Supported: 128 00:19:58.394 Keep Alive: Supported 00:19:58.394 Keep Alive Granularity: 10000 ms 00:19:58.394 00:19:58.394 NVM Command Set Attributes 00:19:58.394 ========================== 00:19:58.394 Submission Queue Entry Size 00:19:58.394 Max: 64 00:19:58.394 Min: 64 00:19:58.394 Completion Queue Entry Size 00:19:58.394 Max: 16 00:19:58.394 Min: 16 00:19:58.394 Number of Namespaces: 32 00:19:58.394 Compare Command: Supported 00:19:58.394 Write Uncorrectable Command: Not Supported 00:19:58.394 Dataset Management Command: Supported 00:19:58.394 Write Zeroes Command: Supported 00:19:58.394 Set Features Save Field: Not Supported 00:19:58.394 Reservations: Supported 00:19:58.394 Timestamp: Not Supported 00:19:58.394 Copy: Supported 00:19:58.394 Volatile Write Cache: Present 00:19:58.394 Atomic Write Unit (Normal): 1 00:19:58.394 Atomic Write Unit (PFail): 1 00:19:58.394 Atomic Compare & Write Unit: 1 00:19:58.394 Fused Compare & Write: Supported 00:19:58.394 Scatter-Gather List 00:19:58.394 SGL Command Set: Supported 00:19:58.394 SGL Keyed: Supported 00:19:58.394 SGL Bit Bucket Descriptor: Not Supported 00:19:58.394 SGL Metadata Pointer: Not Supported 00:19:58.394 Oversized SGL: Not Supported 00:19:58.394 SGL Metadata Address: Not Supported 00:19:58.394 SGL Offset: Supported 00:19:58.394 Transport SGL Data Block: Not Supported 00:19:58.394 Replay Protected Memory Block: Not Supported 00:19:58.394 00:19:58.394 Firmware Slot Information 00:19:58.394 ========================= 00:19:58.394 Active slot: 1 00:19:58.394 Slot 1 Firmware Revision: 25.01 00:19:58.394 00:19:58.394 00:19:58.394 Commands Supported and Effects 00:19:58.394 ============================== 00:19:58.394 Admin Commands 00:19:58.394 -------------- 00:19:58.394 Get Log Page (02h): Supported 00:19:58.394 Identify (06h): Supported 00:19:58.394 Abort (08h): Supported 00:19:58.394 Set Features (09h): Supported 00:19:58.394 Get Features (0Ah): Supported 00:19:58.394 Asynchronous Event Request (0Ch): Supported 00:19:58.394 Keep Alive (18h): Supported 00:19:58.394 I/O Commands 00:19:58.394 ------------ 00:19:58.394 
Flush (00h): Supported LBA-Change 00:19:58.394 Write (01h): Supported LBA-Change 00:19:58.394 Read (02h): Supported 00:19:58.394 Compare (05h): Supported 00:19:58.394 Write Zeroes (08h): Supported LBA-Change 00:19:58.394 Dataset Management (09h): Supported LBA-Change 00:19:58.394 Copy (19h): Supported LBA-Change 00:19:58.394 00:19:58.394 Error Log 00:19:58.394 ========= 00:19:58.394 00:19:58.394 Arbitration 00:19:58.394 =========== 00:19:58.394 Arbitration Burst: 1 00:19:58.394 00:19:58.394 Power Management 00:19:58.394 ================ 00:19:58.394 Number of Power States: 1 00:19:58.394 Current Power State: Power State #0 00:19:58.394 Power State #0: 00:19:58.394 Max Power: 0.00 W 00:19:58.394 Non-Operational State: Operational 00:19:58.394 Entry Latency: Not Reported 00:19:58.394 Exit Latency: Not Reported 00:19:58.394 Relative Read Throughput: 0 00:19:58.394 Relative Read Latency: 0 00:19:58.394 Relative Write Throughput: 0 00:19:58.394 Relative Write Latency: 0 00:19:58.394 Idle Power: Not Reported 00:19:58.394 Active Power: Not Reported 00:19:58.394 Non-Operational Permissive Mode: Not Supported 00:19:58.394 00:19:58.394 Health Information 00:19:58.394 ================== 00:19:58.394 Critical Warnings: 00:19:58.394 Available Spare Space: OK 00:19:58.394 Temperature: OK 00:19:58.394 Device Reliability: OK 00:19:58.394 Read Only: No 00:19:58.394 Volatile Memory Backup: OK 00:19:58.394 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:58.394 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:58.394 Available Spare: 0% 00:19:58.394 Available Spare Threshold: 0% 00:19:58.394 Life Percentage Used:[2024-09-27 13:25:00.093762] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.394 [2024-09-27 13:25:00.093777] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1865750) 00:19:58.394 [2024-09-27 13:25:00.093790] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.394 [2024-09-27 13:25:00.093826] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ca2c0, cid 7, qid 0 00:19:58.394 [2024-09-27 13:25:00.093901] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.394 [2024-09-27 13:25:00.093910] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.394 [2024-09-27 13:25:00.093915] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.394 [2024-09-27 13:25:00.093920] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ca2c0) on tqpair=0x1865750 00:19:58.394 [2024-09-27 13:25:00.093965] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:19:58.394 [2024-09-27 13:25:00.093984] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9840) on tqpair=0x1865750 00:19:58.394 [2024-09-27 13:25:00.093994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.394 [2024-09-27 13:25:00.094000] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c99c0) on tqpair=0x1865750 00:19:58.395 [2024-09-27 13:25:00.094006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.395 [2024-09-27 13:25:00.094011] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9b40) on tqpair=0x1865750 
00:19:58.395 [2024-09-27 13:25:00.094017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.395 [2024-09-27 13:25:00.094022] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.395 [2024-09-27 13:25:00.094027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.395 [2024-09-27 13:25:00.094039] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094044] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094048] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.395 [2024-09-27 13:25:00.094057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.395 [2024-09-27 13:25:00.094082] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.395 [2024-09-27 13:25:00.094138] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.395 [2024-09-27 13:25:00.094146] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.395 [2024-09-27 13:25:00.094150] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094154] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.395 [2024-09-27 13:25:00.094163] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094167] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094172] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.395 [2024-09-27 13:25:00.094180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.395 [2024-09-27 13:25:00.094202] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.395 [2024-09-27 13:25:00.094284] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.395 [2024-09-27 13:25:00.094291] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.395 [2024-09-27 13:25:00.094295] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094299] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.395 [2024-09-27 13:25:00.094305] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:58.395 [2024-09-27 13:25:00.094310] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:58.395 [2024-09-27 13:25:00.094321] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094326] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094330] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.395 [2024-09-27 13:25:00.094338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.395 
[2024-09-27 13:25:00.094356] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.395 [2024-09-27 13:25:00.094414] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.395 [2024-09-27 13:25:00.094421] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.395 [2024-09-27 13:25:00.094425] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094429] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.395 [2024-09-27 13:25:00.094441] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094446] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094450] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.395 [2024-09-27 13:25:00.094458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.395 [2024-09-27 13:25:00.094476] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.395 [2024-09-27 13:25:00.094527] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.395 [2024-09-27 13:25:00.094535] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.395 [2024-09-27 13:25:00.094539] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094543] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.395 [2024-09-27 13:25:00.094555] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094560] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094564] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.395 [2024-09-27 13:25:00.094571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.395 [2024-09-27 13:25:00.094589] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.395 [2024-09-27 13:25:00.094648] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.395 [2024-09-27 13:25:00.094664] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.395 [2024-09-27 13:25:00.094669] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094674] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.395 [2024-09-27 13:25:00.094699] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094706] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094710] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.395 [2024-09-27 13:25:00.094718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.395 [2024-09-27 13:25:00.094740] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.395 [2024-09-27 13:25:00.094798] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:19:58.395 [2024-09-27 13:25:00.094806] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.395 [2024-09-27 13:25:00.094811] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094815] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.395 [2024-09-27 13:25:00.094826] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094832] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094836] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.395 [2024-09-27 13:25:00.094844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.395 [2024-09-27 13:25:00.094863] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.395 [2024-09-27 13:25:00.094920] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.395 [2024-09-27 13:25:00.094932] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.395 [2024-09-27 13:25:00.094937] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.395 [2024-09-27 13:25:00.094941] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.396 [2024-09-27 13:25:00.094953] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.094958] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.094962] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.396 [2024-09-27 13:25:00.094970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.396 [2024-09-27 13:25:00.094988] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.396 [2024-09-27 13:25:00.095058] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.396 [2024-09-27 13:25:00.095066] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.396 [2024-09-27 13:25:00.095070] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095075] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.396 [2024-09-27 13:25:00.095086] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095091] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095095] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.396 [2024-09-27 13:25:00.095103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.396 [2024-09-27 13:25:00.095122] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.396 [2024-09-27 13:25:00.095173] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.396 [2024-09-27 13:25:00.095185] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.396 [2024-09-27 13:25:00.095190] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095194] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.396 [2024-09-27 13:25:00.095206] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095211] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095215] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.396 [2024-09-27 13:25:00.095223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.396 [2024-09-27 13:25:00.095241] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.396 [2024-09-27 13:25:00.095298] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.396 [2024-09-27 13:25:00.095305] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.396 [2024-09-27 13:25:00.095309] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095314] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.396 [2024-09-27 13:25:00.095324] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095330] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095334] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.396 [2024-09-27 13:25:00.095341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.396 [2024-09-27 13:25:00.095359] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.396 [2024-09-27 13:25:00.095411] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.396 [2024-09-27 13:25:00.095418] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.396 [2024-09-27 13:25:00.095422] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095426] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.396 [2024-09-27 13:25:00.095437] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095442] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095447] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.396 [2024-09-27 13:25:00.095454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.396 [2024-09-27 13:25:00.095472] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.396 [2024-09-27 13:25:00.095529] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.396 [2024-09-27 13:25:00.095536] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.396 [2024-09-27 13:25:00.095540] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095545] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 
00:19:58.396 [2024-09-27 13:25:00.095555] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095560] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095564] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.396 [2024-09-27 13:25:00.095572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.396 [2024-09-27 13:25:00.095590] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.396 [2024-09-27 13:25:00.095649] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.396 [2024-09-27 13:25:00.095657] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.396 [2024-09-27 13:25:00.095661] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095665] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.396 [2024-09-27 13:25:00.095676] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095696] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095701] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.396 [2024-09-27 13:25:00.095710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.396 [2024-09-27 13:25:00.095731] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.396 [2024-09-27 13:25:00.095789] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.396 [2024-09-27 13:25:00.095796] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.396 [2024-09-27 13:25:00.095800] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095805] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.396 [2024-09-27 13:25:00.095816] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095821] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095825] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.396 [2024-09-27 13:25:00.095833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.396 [2024-09-27 13:25:00.095851] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.396 [2024-09-27 13:25:00.095904] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.396 [2024-09-27 13:25:00.095911] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.396 [2024-09-27 13:25:00.095915] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095920] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.396 [2024-09-27 13:25:00.095931] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.095936] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:19:58.396 [2024-09-27 13:25:00.095940] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.396 [2024-09-27 13:25:00.095948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.396 [2024-09-27 13:25:00.095966] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.396 [2024-09-27 13:25:00.096016] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.396 [2024-09-27 13:25:00.096024] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.396 [2024-09-27 13:25:00.096028] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.096032] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.396 [2024-09-27 13:25:00.096043] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.096048] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.396 [2024-09-27 13:25:00.096052] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.396 [2024-09-27 13:25:00.096060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.396 [2024-09-27 13:25:00.096078] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.396 [2024-09-27 13:25:00.096131] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.396 [2024-09-27 13:25:00.096139] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.396 [2024-09-27 13:25:00.096143] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096147] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.397 [2024-09-27 13:25:00.096158] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096163] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096167] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.397 [2024-09-27 13:25:00.096175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.397 [2024-09-27 13:25:00.096192] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.397 [2024-09-27 13:25:00.096246] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.397 [2024-09-27 13:25:00.096265] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.397 [2024-09-27 13:25:00.096270] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096274] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.397 [2024-09-27 13:25:00.096286] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096291] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096295] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.397 [2024-09-27 13:25:00.096303] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.397 [2024-09-27 13:25:00.096322] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.397 [2024-09-27 13:25:00.096379] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.397 [2024-09-27 13:25:00.096391] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.397 [2024-09-27 13:25:00.096395] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096400] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.397 [2024-09-27 13:25:00.096411] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096416] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096420] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.397 [2024-09-27 13:25:00.096428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.397 [2024-09-27 13:25:00.096447] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.397 [2024-09-27 13:25:00.096504] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.397 [2024-09-27 13:25:00.096511] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.397 [2024-09-27 13:25:00.096515] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096519] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.397 [2024-09-27 13:25:00.096530] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096535] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096539] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.397 [2024-09-27 13:25:00.096547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.397 [2024-09-27 13:25:00.096565] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.397 [2024-09-27 13:25:00.096618] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.397 [2024-09-27 13:25:00.096625] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.397 [2024-09-27 13:25:00.096629] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096633] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.397 [2024-09-27 13:25:00.096644] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096649] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096653] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.397 [2024-09-27 13:25:00.096661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.397 [2024-09-27 13:25:00.096690] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.397 [2024-09-27 13:25:00.096749] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.397 [2024-09-27 13:25:00.096756] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.397 [2024-09-27 13:25:00.096761] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096765] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.397 [2024-09-27 13:25:00.096776] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096782] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096786] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.397 [2024-09-27 13:25:00.096794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.397 [2024-09-27 13:25:00.096814] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.397 [2024-09-27 13:25:00.096873] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.397 [2024-09-27 13:25:00.096881] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.397 [2024-09-27 13:25:00.096885] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096889] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.397 [2024-09-27 13:25:00.096900] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096905] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.096909] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.397 [2024-09-27 13:25:00.096917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.397 [2024-09-27 13:25:00.096936] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.397 [2024-09-27 13:25:00.096989] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.397 [2024-09-27 13:25:00.096996] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.397 [2024-09-27 13:25:00.097000] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.097005] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.397 [2024-09-27 13:25:00.097016] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.097021] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.097025] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.397 [2024-09-27 13:25:00.097032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.397 [2024-09-27 13:25:00.097050] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.397 [2024-09-27 13:25:00.097107] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.397 [2024-09-27 13:25:00.097114] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.397 [2024-09-27 13:25:00.097118] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.097122] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.397 [2024-09-27 13:25:00.097133] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.097138] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.097142] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.397 [2024-09-27 13:25:00.097150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.397 [2024-09-27 13:25:00.097168] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.397 [2024-09-27 13:25:00.097224] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.397 [2024-09-27 13:25:00.097236] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.397 [2024-09-27 13:25:00.097241] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.097245] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.397 [2024-09-27 13:25:00.097257] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.097262] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.397 [2024-09-27 13:25:00.097266] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.397 [2024-09-27 13:25:00.097274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.398 [2024-09-27 13:25:00.097292] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.398 [2024-09-27 13:25:00.097347] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.398 [2024-09-27 13:25:00.097359] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.398 [2024-09-27 13:25:00.097363] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.398 [2024-09-27 13:25:00.097368] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.398 [2024-09-27 13:25:00.097379] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.398 [2024-09-27 13:25:00.097384] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.398 [2024-09-27 13:25:00.097388] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.398 [2024-09-27 13:25:00.097396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.398 [2024-09-27 13:25:00.097415] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.398 [2024-09-27 13:25:00.097469] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.398 [2024-09-27 13:25:00.097481] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.398 [2024-09-27 13:25:00.097486] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.398 [2024-09-27 
13:25:00.097490] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.398 [2024-09-27 13:25:00.097502] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.398 [2024-09-27 13:25:00.097507] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.398 [2024-09-27 13:25:00.097511] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.398 [2024-09-27 13:25:00.097519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.398 [2024-09-27 13:25:00.097537] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.398 [2024-09-27 13:25:00.097598] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.398 [2024-09-27 13:25:00.097605] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.398 [2024-09-27 13:25:00.097609] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.398 [2024-09-27 13:25:00.097614] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.398 [2024-09-27 13:25:00.097625] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.398 [2024-09-27 13:25:00.097630] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.398 [2024-09-27 13:25:00.097634] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.398 [2024-09-27 13:25:00.097642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.398 [2024-09-27 13:25:00.097659] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.398 [2024-09-27 13:25:00.101703] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.398 [2024-09-27 13:25:00.101726] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.398 [2024-09-27 13:25:00.101732] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.398 [2024-09-27 13:25:00.101737] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.398 [2024-09-27 13:25:00.101752] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.398 [2024-09-27 13:25:00.101757] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.398 [2024-09-27 13:25:00.101761] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1865750) 00:19:58.398 [2024-09-27 13:25:00.101771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.398 [2024-09-27 13:25:00.101798] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c9cc0, cid 3, qid 0 00:19:58.398 [2024-09-27 13:25:00.101864] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.398 [2024-09-27 13:25:00.101872] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.398 [2024-09-27 13:25:00.101876] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.398 [2024-09-27 13:25:00.101880] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c9cc0) on tqpair=0x1865750 00:19:58.398 [2024-09-27 13:25:00.101889] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:19:58.398 0% 00:19:58.398 Data Units Read: 0 00:19:58.398 Data Units Written: 0 00:19:58.398 Host Read Commands: 0 00:19:58.398 Host Write Commands: 0 00:19:58.398 Controller Busy Time: 0 minutes 00:19:58.398 Power Cycles: 0 00:19:58.398 Power On Hours: 0 hours 00:19:58.398 Unsafe Shutdowns: 0 00:19:58.398 Unrecoverable Media Errors: 0 00:19:58.398 Lifetime Error Log Entries: 0 00:19:58.398 Warning Temperature Time: 0 minutes 00:19:58.398 Critical Temperature Time: 0 minutes 00:19:58.398 00:19:58.398 Number of Queues 00:19:58.398 ================ 00:19:58.398 Number of I/O Submission Queues: 127 00:19:58.398 Number of I/O Completion Queues: 127 00:19:58.398 00:19:58.398 Active Namespaces 00:19:58.398 ================= 00:19:58.398 Namespace ID:1 00:19:58.398 Error Recovery Timeout: Unlimited 00:19:58.398 Command Set Identifier: NVM (00h) 00:19:58.398 Deallocate: Supported 00:19:58.398 Deallocated/Unwritten Error: Not Supported 00:19:58.398 Deallocated Read Value: Unknown 00:19:58.398 Deallocate in Write Zeroes: Not Supported 00:19:58.398 Deallocated Guard Field: 0xFFFF 00:19:58.398 Flush: Supported 00:19:58.398 Reservation: Supported 00:19:58.398 Namespace Sharing Capabilities: Multiple Controllers 00:19:58.398 Size (in LBAs): 131072 (0GiB) 00:19:58.398 Capacity (in LBAs): 131072 (0GiB) 00:19:58.398 Utilization (in LBAs): 131072 (0GiB) 00:19:58.398 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:58.398 EUI64: ABCDEF0123456789 00:19:58.398 UUID: 813688fc-15c2-4a2d-937a-4d49a107ef91 00:19:58.398 Thin Provisioning: Not Supported 00:19:58.398 Per-NS Atomic Units: Yes 00:19:58.398 Atomic Boundary Size (Normal): 0 00:19:58.398 Atomic Boundary Size (PFail): 0 00:19:58.398 Atomic Boundary Offset: 0 00:19:58.398 Maximum Single Source Range Length: 65535 00:19:58.398 Maximum Copy Length: 65535 00:19:58.398 Maximum Source Range Count: 1 00:19:58.398 NGUID/EUI64 Never Reused: No 00:19:58.398 Namespace Write Protected: No 00:19:58.398 Number of LBA Formats: 1 00:19:58.398 Current LBA Format: LBA Format #00 00:19:58.398 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:58.398 00:19:58.398 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:19:58.398 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:58.398 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.398 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.398 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.398 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:58.398 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:19:58.398 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@331 -- # nvmfcleanup 00:19:58.398 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@99 -- # sync 00:19:58.398 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:58.398 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # set +e 00:19:58.398 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:58.398 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:58.398 rmmod nvme_tcp 00:19:58.398 rmmod 
nvme_fabrics 00:19:58.398 rmmod nvme_keyring 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # set -e 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # return 0 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # '[' -n 73965 ']' 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@333 -- # killprocess 73965 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 73965 ']' 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 73965 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73965 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73965' 00:19:58.658 killing process with pid 73965 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 73965 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 73965 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # nvmf_fini 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@264 -- # local dev 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@267 -- # remove_target_ns 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@268 -- # delete_main_bridge 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:19:58.658 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:19:58.917 13:25:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@271 -- # continue 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@271 -- # continue 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:19:58.917 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # _dev=0 00:19:58.918 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # dev_map=() 00:19:58.918 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@284 -- # iptr 00:19:58.918 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@538 -- # iptables-save 00:19:58.918 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:19:58.918 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@538 -- # iptables-restore 00:19:58.918 00:19:58.918 real 0m2.873s 00:19:58.918 user 0m7.465s 00:19:58.918 sys 0m0.721s 00:19:58.918 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:58.918 ************************************ 00:19:58.918 END TEST nvmf_identify 00:19:58.918 ************************************ 00:19:58.918 13:25:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.918 13:25:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@21 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:58.918 13:25:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:58.918 13:25:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:58.918 13:25:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.918 
************************************ 00:19:58.918 START TEST nvmf_perf 00:19:58.918 ************************************ 00:19:58.918 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:59.177 * Looking for test storage... 00:19:59.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:59.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.177 --rc genhtml_branch_coverage=1 00:19:59.177 --rc genhtml_function_coverage=1 00:19:59.177 --rc genhtml_legend=1 00:19:59.177 --rc geninfo_all_blocks=1 00:19:59.177 --rc geninfo_unexecuted_blocks=1 00:19:59.177 00:19:59.177 ' 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:59.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.177 --rc genhtml_branch_coverage=1 00:19:59.177 --rc genhtml_function_coverage=1 00:19:59.177 --rc genhtml_legend=1 00:19:59.177 --rc geninfo_all_blocks=1 00:19:59.177 --rc geninfo_unexecuted_blocks=1 00:19:59.177 00:19:59.177 ' 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:59.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.177 --rc genhtml_branch_coverage=1 00:19:59.177 --rc genhtml_function_coverage=1 00:19:59.177 --rc genhtml_legend=1 00:19:59.177 --rc geninfo_all_blocks=1 00:19:59.177 --rc geninfo_unexecuted_blocks=1 00:19:59.177 00:19:59.177 ' 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:59.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.177 --rc genhtml_branch_coverage=1 00:19:59.177 --rc genhtml_function_coverage=1 00:19:59.177 --rc genhtml_legend=1 00:19:59.177 --rc geninfo_all_blocks=1 00:19:59.177 --rc geninfo_unexecuted_blocks=1 00:19:59.177 00:19:59.177 ' 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:59.177 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@50 -- # : 0 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:59.178 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@292 -- # prepare_net_devs 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # local -g is_hw=no 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # remove_target_ns 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@276 -- # nvmf_veth_init 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@233 -- # create_target_ns 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@234 -- # create_main_bridge 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@114 -- # delete_main_bridge 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@130 -- # return 0 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:19:59.178 13:25:00 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@28 -- # local -g _dev 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # ips=() 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@160 -- # set_up initiator0 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:19:59.178 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@160 -- # set_up target0 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set target0 up 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@161 -- # set_up target0_br 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # add_to_ns target0 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:19:59.179 13:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:19:59.179 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:19:59.179 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:19:59.179 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:59.179 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:19:59.179 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772161 00:19:59.179 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:59.179 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:19:59.179 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:19:59.179 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:19:59.179 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:19:59.179 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:19:59.179 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:19:59.438 10.0.0.1 00:19:59.438 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:19:59.438 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:59.438 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:19:59.438 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:59.438 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:19:59.438 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772162 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:19:59.439 10.0.0.2 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@75 -- # set_up initiator0 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@79 -- # 
add_to_bridge target0_br 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@138 -- # set_up target0_br 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # ips=() 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@160 -- # set_up initiator1 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 
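Condensed into plain shell, the interface setup traced above for the first initiator/target pair amounts to roughly the sketch below. It is a hand-written replay of the commands visible in the trace, not the nvmf/setup.sh functions themselves; it assumes root privileges plus iproute2 and iptables, and it reuses the device and namespace names from the log.

# Shared bridge and the target network namespace.
ip netns add nvmf_ns_spdk
ip link add nvmf_br type bridge
ip link set nvmf_br up
# The harness tags its rules with an SPDK_NVMF comment so they can be filtered out at teardown;
# the plain rule is shown here for brevity.
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# First pair: initiator0 stays in the root namespace, target0 moves into the test namespace.
ip link add initiator0 type veth peer name initiator0_br
ip link add target0 type veth peer name target0_br
ip link set initiator0 up
ip link set initiator0_br up
ip link set target0_br up
ip link set target0 netns nvmf_ns_spdk

# Address the pair: 10.0.0.1 on the host side, 10.0.0.2 inside the namespace.
ip addr add 10.0.0.1/24 dev initiator0
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
ip netns exec nvmf_ns_spdk ip link set target0 up

# Attach the bridge-side peers and open the NVMe/TCP listen port toward the initiator.
ip link set initiator0_br master nvmf_br
ip link set target0_br master nvmf_br
iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT

The second pair (initiator1/target1 with 10.0.0.3 and 10.0.0.4) follows the same pattern, as the records that continue below show.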
00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@160 -- # set_up target1 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set target1 up 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@161 -- # set_up target1_br 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # add_to_ns target1 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772163 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 
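The 10.0.0.x addresses come from a single integer pool: 167772161 is 0x0A000001, so consecutive values map to 10.0.0.1, 10.0.0.2, and so on. Only the final printf is visible in the trace, so the octet extraction below is an assumed reconstruction of what a val_to_ip-style helper has to do; the shift-and-mask arithmetic is illustrative rather than copied from setup.sh.

val_to_ip() {
    # Split a 32-bit value into dotted-quad octets, most significant byte first.
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) \
        $(( (val >> 16) & 255 )) \
        $(( (val >> 8)  & 255 )) \
        $((  val        & 255 ))
}

val_to_ip 167772161   # 10.0.0.1 (initiator0)
val_to_ip 167772163   # 10.0.0.3 (initiator1), matching the printf '%u.%u.%u.%u\n' 10 0 0 3 in the trace above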
00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:19:59.439 10.0.0.3 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:19:59.439 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772164 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:19:59.440 10.0.0.4 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@75 -- # set_up initiator1 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:19:59.440 13:25:01 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@138 -- # set_up target1_br 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@38 -- # ping_ips 2 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo 
initiator0 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=initiator0 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:59.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:59.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:19:59.440 00:19:59.440 --- 10.0.0.1 ping statistics --- 00:19:59.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.440 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target0 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo target0 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=target0 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 
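The trace above resolves each interface's address from /sys/class/net/<dev>/ifalias and then ping-tests every initiator/target pair: the initiator address is pinged from inside the nvmf_ns_spdk namespace, the target address from the host. A minimal standalone sketch of that pattern, reusing the device names, namespace and 10.0.0.x addresses seen in this run (treat all of them as assumptions outside this CI environment):

get_ip() {                          # read the address stored in the interface's ifalias
    local dev=$1 netns=${2:-}
    if [[ -n $netns ]]; then
        ip netns exec "$netns" cat "/sys/class/net/$dev/ifalias"
    else
        cat "/sys/class/net/$dev/ifalias"
    fi
}

ping_check() {                      # single ICMP echo, optionally run inside the target namespace
    local ip=$1 netns=${2:-}
    if [[ -n $netns ]]; then
        ip netns exec "$netns" ping -c 1 "$ip"
    else
        ping -c 1 "$ip"
    fi
}

# pair 0: initiator0 lives on the host, target0 inside nvmf_ns_spdk
ping_check "$(get_ip initiator0)" nvmf_ns_spdk     # namespace pings 10.0.0.1
ping_check "$(get_ip target0 nvmf_ns_spdk)"        # host pings 10.0.0.2
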
00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:19:59.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:19:59.440 00:19:59.440 --- 10.0.0.2 ping statistics --- 00:19:59.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.440 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair++ )) 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:19:59.440 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:19:59.441 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:19:59.441 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:59.441 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:19:59.441 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=initiator1 00:19:59.441 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:59.441 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:59.441 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo initiator1 00:19:59.441 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=initiator1 00:19:59.441 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:59.441 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:19:59.441 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:19:59.441 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:19:59.441 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:19:59.441 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:19:59.441 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:59.441 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:59.441 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:59.441 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:19:59.441 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:19:59.700 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:59.700 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:19:59.700 00:19:59.700 --- 10.0.0.3 ping statistics --- 00:19:59.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.700 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo target1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=target1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:19:59.700 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:59.700 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:19:59.700 00:19:59.700 --- 10.0.0.4 ping statistics --- 00:19:59.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.700 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair++ )) 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # return 0 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo initiator0 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=initiator0 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=initiator1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo initiator1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=initiator1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target0 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo target0 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=target0 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:59.700 
13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo target1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=target1 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:19:59.700 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:59.701 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:59.701 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@324 -- # nvmfpid=74224 00:19:59.701 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:59.701 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # waitforlisten 74224 00:19:59.701 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 74224 ']' 00:19:59.701 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.701 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:59.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.701 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.701 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:59.701 13:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:59.701 [2024-09-27 13:25:01.454067] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
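At this point nvmfappstart has launched nvmf_tgt inside the nvmf_ns_spdk namespace with the core mask and trace flags shown, and the test waits for the RPC socket before configuring the target. A minimal sketch of that start-and-wait step, assuming the default /var/tmp/spdk.sock RPC socket and the repository path used in this run (the polling loop below is illustrative, not the suite's actual waitforlisten helper):

# start the target in the namespace, exactly as traced above
ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
tgt_pid=$!

# poll the RPC socket until the target answers; rpc_get_methods is a cheap query
for _ in $(seq 1 100); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.5
done
echo "nvmf_tgt ready with pid $tgt_pid"
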
00:19:59.701 [2024-09-27 13:25:01.454203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.959 [2024-09-27 13:25:01.595372] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:59.959 [2024-09-27 13:25:01.685222] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.959 [2024-09-27 13:25:01.685302] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.959 [2024-09-27 13:25:01.685324] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.959 [2024-09-27 13:25:01.685340] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.959 [2024-09-27 13:25:01.685354] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:59.959 [2024-09-27 13:25:01.685495] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.959 [2024-09-27 13:25:01.685583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.959 [2024-09-27 13:25:01.686095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:59.959 [2024-09-27 13:25:01.686112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.959 [2024-09-27 13:25:01.720036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:00.960 13:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:00.960 13:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:20:00.960 13:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:20:00.960 13:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:00.960 13:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:00.960 13:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.960 13:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:00.960 13:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:01.218 13:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:01.218 13:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:01.476 13:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:20:01.476 13:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:01.735 13:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:01.735 13:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:20:01.735 13:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:01.735 13:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:01.735 13:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:01.994 [2024-09-27 13:25:03.824778] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.252 13:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:02.511 13:25:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:02.511 13:25:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:02.770 13:25:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:02.770 13:25:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:03.028 13:25:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:03.287 [2024-09-27 13:25:05.050287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.287 13:25:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:03.545 13:25:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:03.545 13:25:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:03.545 13:25:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:03.545 13:25:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:04.919 Initializing NVMe Controllers 00:20:04.919 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:04.919 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:04.919 Initialization complete. Launching workers. 00:20:04.919 ======================================================== 00:20:04.919 Latency(us) 00:20:04.919 Device Information : IOPS MiB/s Average min max 00:20:04.919 PCIE (0000:00:10.0) NSID 1 from core 0: 24796.60 96.86 1290.56 304.27 5065.44 00:20:04.919 ======================================================== 00:20:04.919 Total : 24796.60 96.86 1290.56 304.27 5065.44 00:20:04.919 00:20:04.919 13:25:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:06.293 Initializing NVMe Controllers 00:20:06.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:06.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:06.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:06.293 Initialization complete. Launching workers. 
00:20:06.293 ======================================================== 00:20:06.293 Latency(us) 00:20:06.293 Device Information : IOPS MiB/s Average min max 00:20:06.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3558.93 13.90 280.61 108.78 7206.42 00:20:06.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8047.10 5027.62 11991.89 00:20:06.293 ======================================================== 00:20:06.293 Total : 3683.92 14.39 544.13 108.78 11991.89 00:20:06.293 00:20:06.293 13:25:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:07.734 Initializing NVMe Controllers 00:20:07.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:07.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:07.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:07.734 Initialization complete. Launching workers. 00:20:07.734 ======================================================== 00:20:07.734 Latency(us) 00:20:07.734 Device Information : IOPS MiB/s Average min max 00:20:07.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8441.26 32.97 3791.28 712.94 7918.07 00:20:07.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3990.11 15.59 8032.30 6536.03 12813.15 00:20:07.734 ======================================================== 00:20:07.734 Total : 12431.37 48.56 5152.52 712.94 12813.15 00:20:07.734 00:20:07.734 13:25:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:07.734 13:25:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:10.264 Initializing NVMe Controllers 00:20:10.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:10.264 Controller IO queue size 128, less than required. 00:20:10.264 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:10.264 Controller IO queue size 128, less than required. 00:20:10.264 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:10.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:10.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:10.264 Initialization complete. Launching workers. 
00:20:10.264 ======================================================== 00:20:10.264 Latency(us) 00:20:10.264 Device Information : IOPS MiB/s Average min max 00:20:10.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1703.30 425.82 76245.33 38578.65 116145.40 00:20:10.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 671.22 167.81 200407.12 69082.57 305899.83 00:20:10.264 ======================================================== 00:20:10.264 Total : 2374.52 593.63 111343.05 38578.65 305899.83 00:20:10.264 00:20:10.264 13:25:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:10.264 Initializing NVMe Controllers 00:20:10.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:10.264 Controller IO queue size 128, less than required. 00:20:10.264 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:10.264 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:10.264 Controller IO queue size 128, less than required. 00:20:10.264 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:10.264 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:10.264 WARNING: Some requested NVMe devices were skipped 00:20:10.264 No valid NVMe controllers or AIO or URING devices found 00:20:10.264 13:25:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:12.792 Initializing NVMe Controllers 00:20:12.792 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:12.792 Controller IO queue size 128, less than required. 00:20:12.792 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:12.792 Controller IO queue size 128, less than required. 00:20:12.792 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:12.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:12.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:12.792 Initialization complete. Launching workers. 
00:20:12.792 00:20:12.792 ==================== 00:20:12.792 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:12.792 TCP transport: 00:20:12.792 polls: 9493 00:20:12.792 idle_polls: 5448 00:20:12.792 sock_completions: 4045 00:20:12.792 nvme_completions: 6621 00:20:12.792 submitted_requests: 9850 00:20:12.792 queued_requests: 1 00:20:12.792 00:20:12.792 ==================== 00:20:12.792 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:12.792 TCP transport: 00:20:12.792 polls: 11763 00:20:12.792 idle_polls: 7243 00:20:12.792 sock_completions: 4520 00:20:12.792 nvme_completions: 6833 00:20:12.792 submitted_requests: 10290 00:20:12.792 queued_requests: 1 00:20:12.792 ======================================================== 00:20:12.792 Latency(us) 00:20:12.793 Device Information : IOPS MiB/s Average min max 00:20:12.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1654.26 413.57 79353.66 43420.16 126300.71 00:20:12.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1707.24 426.81 75748.64 32244.48 124302.30 00:20:12.793 ======================================================== 00:20:12.793 Total : 3361.51 840.38 77522.74 32244.48 126300.71 00:20:12.793 00:20:12.793 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:12.793 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:13.051 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:13.051 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:13.051 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:13.051 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@331 -- # nvmfcleanup 00:20:13.051 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@99 -- # sync 00:20:13.051 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:13.051 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # set +e 00:20:13.051 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:13.051 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:13.051 rmmod nvme_tcp 00:20:13.052 rmmod nvme_fabrics 00:20:13.052 rmmod nvme_keyring 00:20:13.052 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:13.052 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # set -e 00:20:13.052 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # return 0 00:20:13.052 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # '[' -n 74224 ']' 00:20:13.052 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@333 -- # killprocess 74224 00:20:13.052 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 74224 ']' 00:20:13.052 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 74224 00:20:13.310 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:20:13.310 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:13.310 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74224 00:20:13.310 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:13.310 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:13.310 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74224' 00:20:13.310 killing process with pid 74224 00:20:13.310 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 74224 00:20:13.310 13:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 74224 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # nvmf_fini 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@264 -- # local dev 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@267 -- # remove_target_ns 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@268 -- # delete_main_bridge 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:20:13.876 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:20:14.141 13:25:15 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@271 -- # continue 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@271 -- # continue 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # _dev=0 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # dev_map=() 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@284 -- # iptr 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@538 -- # iptables-restore 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@538 -- # iptables-save 00:20:14.141 00:20:14.141 real 0m15.050s 00:20:14.141 user 0m55.204s 00:20:14.141 sys 0m4.025s 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:14.141 ************************************ 00:20:14.141 END TEST nvmf_perf 00:20:14.141 ************************************ 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.141 ************************************ 00:20:14.141 START TEST nvmf_fio_host 00:20:14.141 ************************************ 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:14.141 * Looking for test storage... 
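Before the fio host test above gets going, the nvmf_perf teardown (nvmf_fini) has already removed the bridge and host-side veth devices and stripped only the iptables rules carrying the SPDK_NVMF comment tag by filtering a saved ruleset. A minimal standalone sketch of that cleanup pattern, using the device and namespace names from this run (assumed to exist; errors are ignored so partial state is tolerated):

# deleting the namespace removes target0/target1 along with it
ip netns delete nvmf_ns_spdk 2>/dev/null || true

# drop the host-side links created for this run
for dev in nvmf_br initiator0 initiator1; do
    ip link delete "$dev" 2>/dev/null || true
done

# keep every rule except the ones tagged with the SPDK_NVMF comment
iptables-save | grep -v SPDK_NVMF | iptables-restore
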
00:20:14.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:20:14.141 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:14.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.446 --rc genhtml_branch_coverage=1 00:20:14.446 --rc genhtml_function_coverage=1 00:20:14.446 --rc genhtml_legend=1 00:20:14.446 --rc geninfo_all_blocks=1 00:20:14.446 --rc geninfo_unexecuted_blocks=1 00:20:14.446 00:20:14.446 ' 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:14.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.446 --rc genhtml_branch_coverage=1 00:20:14.446 --rc genhtml_function_coverage=1 00:20:14.446 --rc genhtml_legend=1 00:20:14.446 --rc geninfo_all_blocks=1 00:20:14.446 --rc geninfo_unexecuted_blocks=1 00:20:14.446 00:20:14.446 ' 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:14.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.446 --rc genhtml_branch_coverage=1 00:20:14.446 --rc genhtml_function_coverage=1 00:20:14.446 --rc genhtml_legend=1 00:20:14.446 --rc geninfo_all_blocks=1 00:20:14.446 --rc geninfo_unexecuted_blocks=1 00:20:14.446 00:20:14.446 ' 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:14.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.446 --rc genhtml_branch_coverage=1 00:20:14.446 --rc genhtml_function_coverage=1 00:20:14.446 --rc genhtml_legend=1 00:20:14.446 --rc geninfo_all_blocks=1 00:20:14.446 --rc geninfo_unexecuted_blocks=1 00:20:14.446 00:20:14.446 ' 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.446 13:25:15 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.446 13:25:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:14.446 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@50 -- # : 0 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:14.447 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.447 13:25:16 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # prepare_net_devs 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # local -g is_hw=no 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # remove_target_ns 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@276 -- # nvmf_veth_init 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@233 -- # create_target_ns 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@234 -- # create_main_bridge 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@114 -- # delete_main_bridge 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@130 -- # return 0 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set 
nvmf_br up' 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@28 -- # local -g _dev 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # ips=() 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:20:14.447 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@160 -- # set_up initiator0 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:20:14.448 
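Everything nvmftestinit does from here on is plain iproute2/iptables work. The namespace-and-bridge bring-up just traced (create_target_ns plus create_main_bridge) reduces to the following sketch; names are the ones used in the trace and the commands need root:

# Target namespace that will host nvmf_tgt, plus the bridge joining all veth peers
ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
# let traffic be forwarded between the bridge ports
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'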
13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@160 -- # set_up target0 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set target0 up 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@161 -- # set_up target0_br 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # add_to_ns target0 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772161 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # echo 
10.0.0.1 00:20:14.448 10.0.0.1 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772162 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:20:14.448 10.0.0.2 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@75 -- # set_up initiator0 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 
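setup_interface_pair 0, traced above, boils down to one veth pair per side, with the target end pushed into the namespace and the first two addresses of the 0x0a000001 pool assigned. A condensed sketch of exactly those commands:

# Interface pair 0: the initiator stays on the host, target0 moves into the namespace
ip link add initiator0 type veth peer name initiator0_br
ip link set initiator0 up
ip link set initiator0_br up
ip link add target0 type veth peer name target0_br
ip link set target0 up
ip link set target0_br up
ip link set target0 netns nvmf_ns_spdk

# Pool values 167772161/167772162 -> 10.0.0.1 (initiator) and 10.0.0.2 (target);
# the ifalias write is what get_ip_address reads back later in the trace.
ip addr add 10.0.0.1/24 dev initiator0
echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias
ip link set initiator0 up
ip netns exec nvmf_ns_spdk ip link set target0 up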
00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@138 -- # set_up target0_br 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # ips=() 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:20:14.448 13:25:16 
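The remaining pair-0 steps traced here attach the host-side peers to the bridge and punch the NVMe/TCP port through the firewall on the initiator interface; the same two operations are repeated for initiator1/target1 right below:

# Enslave the *_br peers so initiator0 and target0 share an L2 segment
ip link set initiator0_br master nvmf_br
ip link set initiator0_br up
ip link set target0_br master nvmf_br
ip link set target0_br up

# Accept NVMe/TCP traffic arriving on the initiator side (port 4420)
iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'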
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@160 -- # set_up initiator1 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:20:14.448 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@160 -- # set_up target1 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set target1 up 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@161 -- # set_up target1_br 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # add_to_ns target1 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:14.449 
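The printf calls in the trace are the pool-to-address conversion: setup_interfaces hands out consecutive integers starting at 0x0a000001 and val_to_ip renders them as dotted quads (the conversion for 167772163/167772164 follows just below). The bit-shifting body here is an assumption, setup.sh may extract the octets differently, but the results match the trace:

# Worked example of the integer IP pool used above
val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
                $((val >> 24 & 0xff)) $((val >> 16 & 0xff)) \
                $((val >> 8 & 0xff)) $((val & 0xff))
}
val_to_ip $((0x0a000001))   # 167772161 -> 10.0.0.1 (initiator0)
val_to_ip 167772162         # -> 10.0.0.2 (target0)
val_to_ip 167772163         # -> 10.0.0.3 (initiator1)
val_to_ip 167772164         # -> 10.0.0.4 (target1)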
13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772163 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:20:14.449 10.0.0.3 00:20:14.449 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772164 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:20:14.708 10.0.0.4 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@75 -- # set_up initiator1 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:14.708 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@138 -- # set_up target1_br 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@38 -- # ping_ips 2 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair < 
pairs )) 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo initiator0 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=initiator0 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:14.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:14.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:20:14.709 00:20:14.709 --- 10.0.0.1 ping statistics --- 00:20:14.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.709 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target0 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo target0 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=target0 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:20:14.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:14.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:20:14.709 00:20:14.709 --- 10.0.0.2 ping statistics --- 00:20:14.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.709 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair++ )) 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=initiator1 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo initiator1 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=initiator1 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:14.709 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:20:14.710 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:14.710 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:20:14.710 00:20:14.710 --- 10.0.0.3 ping statistics --- 00:20:14.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.710 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo target1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=target1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:20:14.710 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:14.710 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.113 ms 00:20:14.710 00:20:14.710 --- 10.0.0.4 ping statistics --- 00:20:14.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.710 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair++ )) 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # return 0 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo initiator0 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=initiator0 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:20:14.710 13:25:16 
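With both pairs up, ping_ips verifies every address once: initiator addresses from inside the namespace (where the target will listen) and target addresses from the host. The trace then reads the ifalias files back and publishes the results under the legacy variable names host/fio.sh consumes; the last three assignments follow just below this point. A condensed recap:

# Connectivity check mirroring the four pings above
ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
ping -c 1 10.0.0.2
ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3
ping -c 1 10.0.0.4

# Legacy names exported for the rest of the test
NVMF_FIRST_INITIATOR_IP=10.0.0.1
NVMF_SECOND_INITIATOR_IP=10.0.0.3
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_SECOND_TARGET_IP=10.0.0.4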
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=initiator1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo initiator1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=initiator1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target0 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo target0 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=target0 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target1 
in_ns=NVMF_TARGET_NS_CMD ip 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo target1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=target1 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:14.710 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:14.711 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.711 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74692 00:20:14.711 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:14.711 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74692 00:20:14.711 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:14.711 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 74692 ']' 00:20:14.711 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.711 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:14.711 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.711 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:14.711 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.970 [2024-09-27 13:25:16.604434] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:20:14.970 [2024-09-27 13:25:16.604569] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.970 [2024-09-27 13:25:16.746359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:14.970 [2024-09-27 13:25:16.808896] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.970 [2024-09-27 13:25:16.808951] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.970 [2024-09-27 13:25:16.808964] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.970 [2024-09-27 13:25:16.808973] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.970 [2024-09-27 13:25:16.808980] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.970 [2024-09-27 13:25:16.809156] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.970 [2024-09-27 13:25:16.809297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.970 [2024-09-27 13:25:16.809879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.970 [2024-09-27 13:25:16.809890] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.228 [2024-09-27 13:25:16.852286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:15.228 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:15.228 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:20:15.228 13:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:15.487 [2024-09-27 13:25:17.267172] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.487 13:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:15.487 13:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:15.487 13:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.487 13:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:16.052 Malloc1 00:20:16.052 13:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:16.310 13:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:16.568 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:16.825 [2024-09-27 13:25:18.519306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.825 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:17.083 13:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:17.341 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:17.341 fio-3.35 00:20:17.341 Starting 1 thread 00:20:19.869 00:20:19.869 test: (groupid=0, jobs=1): err= 0: pid=74773: Fri Sep 27 13:25:21 2024 00:20:19.869 read: IOPS=8731, BW=34.1MiB/s (35.8MB/s)(68.5MiB/2007msec) 00:20:19.869 slat (usec): min=2, max=320, avg= 2.54, stdev= 3.11 00:20:19.869 clat (usec): min=2524, max=13127, avg=7625.55, stdev=526.35 00:20:19.869 lat (usec): min=2566, max=13129, avg=7628.09, stdev=526.00 00:20:19.869 clat percentiles (usec): 00:20:19.869 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7046], 20.00th=[ 7242], 00:20:19.869 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:20:19.869 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8225], 95.00th=[ 8455], 00:20:19.869 | 99.00th=[ 8848], 99.50th=[ 9110], 99.90th=[11863], 99.95th=[12387], 00:20:19.869 | 99.99th=[13042] 00:20:19.869 bw ( KiB/s): min=34072, max=35280, per=100.00%, avg=34928.00, stdev=574.70, samples=4 00:20:19.869 iops : min= 8518, max= 8820, avg=8732.00, stdev=143.68, samples=4 00:20:19.869 write: IOPS=8730, BW=34.1MiB/s (35.8MB/s)(68.4MiB/2007msec); 0 zone resets 00:20:19.869 slat (usec): min=2, max=228, avg= 2.66, stdev= 1.99 00:20:19.869 clat (usec): min=2371, max=12868, avg=6958.22, stdev=489.94 00:20:19.869 lat (usec): min=2385, max=12871, avg=6960.88, stdev=489.74 00:20:19.869 clat percentiles (usec): 00:20:19.869 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6652], 00:20:19.869 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7046], 00:20:19.869 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 7504], 95.00th=[ 7635], 00:20:19.869 | 99.00th=[ 8029], 99.50th=[ 8356], 99.90th=[11207], 99.95th=[12125], 00:20:19.869 | 99.99th=[12780] 00:20:19.869 bw ( KiB/s): min=34248, max=35392, per=99.97%, avg=34914.00, stdev=482.32, samples=4 00:20:19.869 iops : min= 8562, max= 8848, avg=8728.50, stdev=120.58, samples=4 00:20:19.869 lat (msec) : 4=0.09%, 10=99.72%, 20=0.19% 00:20:19.869 cpu : usr=71.83%, sys=21.04%, ctx=17, majf=0, minf=6 00:20:19.869 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:19.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:19.869 issued rwts: total=17525,17523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.869 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:19.869 00:20:19.869 Run status group 0 (all jobs): 00:20:19.869 READ: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=68.5MiB (71.8MB), run=2007-2007msec 00:20:19.869 WRITE: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=68.4MiB (71.8MB), run=2007-2007msec 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:19.870 13:25:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:19.870 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:19.870 fio-3.35 00:20:19.870 Starting 1 thread 00:20:22.460 00:20:22.460 test: (groupid=0, jobs=1): err= 0: pid=74816: Fri Sep 27 13:25:23 2024 00:20:22.460 read: IOPS=7968, BW=125MiB/s (131MB/s)(250MiB/2009msec) 00:20:22.460 slat (usec): min=3, max=121, avg= 3.98, stdev= 1.99 00:20:22.460 clat (usec): min=2124, max=17178, avg=8909.05, stdev=2410.52 00:20:22.460 lat (usec): min=2128, max=17181, avg=8913.04, stdev=2410.58 00:20:22.460 clat percentiles (usec): 00:20:22.460 | 1.00th=[ 4293], 5.00th=[ 5211], 10.00th=[ 5800], 20.00th=[ 6718], 00:20:22.460 | 30.00th=[ 7504], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9372], 00:20:22.460 | 70.00th=[10290], 80.00th=[10945], 90.00th=[12125], 95.00th=[13042], 00:20:22.460 | 99.00th=[15139], 99.50th=[15664], 99.90th=[16319], 99.95th=[16450], 00:20:22.460 | 99.99th=[16712] 00:20:22.460 bw ( KiB/s): min=60416, max=68992, per=51.20%, avg=65280.00, stdev=3936.56, samples=4 
00:20:22.460 iops : min= 3776, max= 4312, avg=4080.00, stdev=246.04, samples=4 00:20:22.460 write: IOPS=4694, BW=73.3MiB/s (76.9MB/s)(133MiB/1815msec); 0 zone resets 00:20:22.460 slat (usec): min=34, max=209, avg=40.54, stdev= 6.70 00:20:22.460 clat (usec): min=2282, max=21567, avg=12615.77, stdev=2220.69 00:20:22.460 lat (usec): min=2320, max=21606, avg=12656.31, stdev=2221.54 00:20:22.460 clat percentiles (usec): 00:20:22.460 | 1.00th=[ 8356], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10683], 00:20:22.460 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12387], 60.00th=[12911], 00:20:22.460 | 70.00th=[13566], 80.00th=[14353], 90.00th=[15795], 95.00th=[16712], 00:20:22.460 | 99.00th=[18482], 99.50th=[18744], 99.90th=[20841], 99.95th=[21365], 00:20:22.460 | 99.99th=[21627] 00:20:22.460 bw ( KiB/s): min=61824, max=71904, per=90.44%, avg=67926.50, stdev=4668.37, samples=4 00:20:22.460 iops : min= 3864, max= 4494, avg=4245.25, stdev=291.83, samples=4 00:20:22.460 lat (msec) : 4=0.40%, 10=46.67%, 20=52.87%, 50=0.06% 00:20:22.460 cpu : usr=82.42%, sys=13.00%, ctx=58, majf=0, minf=5 00:20:22.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:20:22.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:22.460 issued rwts: total=16009,8520,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.460 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:22.460 00:20:22.460 Run status group 0 (all jobs): 00:20:22.460 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=250MiB (262MB), run=2009-2009msec 00:20:22.460 WRITE: bw=73.3MiB/s (76.9MB/s), 73.3MiB/s-73.3MiB/s (76.9MB/s-76.9MB/s), io=133MiB (140MB), run=1815-1815msec 00:20:22.460 13:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@331 -- # nvmfcleanup 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@99 -- # sync 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # set +e 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:22.460 rmmod nvme_tcp 00:20:22.460 rmmod nvme_fabrics 00:20:22.460 rmmod nvme_keyring 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # set -e 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # return 0 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # '[' -n 74692 ']' 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@333 -- # killprocess 74692 00:20:22.460 13:25:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 74692 ']' 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 74692 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74692 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:22.460 killing process with pid 74692 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74692' 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 74692 00:20:22.460 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 74692 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # nvmf_fini 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@264 -- # local dev 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@267 -- # remove_target_ns 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@268 -- # delete_main_bridge 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:20:22.718 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@271 -- # continue 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@271 -- # continue 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # _dev=0 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # dev_map=() 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@284 -- # iptr 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@538 -- # iptables-save 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@538 -- # iptables-restore 00:20:22.977 ************************************ 00:20:22.977 END TEST nvmf_fio_host 00:20:22.977 ************************************ 00:20:22.977 00:20:22.977 real 0m8.776s 00:20:22.977 user 0m35.010s 00:20:22.977 sys 0m2.414s 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.977 ************************************ 00:20:22.977 START TEST nvmf_failover 00:20:22.977 ************************************ 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:22.977 * Looking for test storage... 
00:20:22.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:22.977 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:23.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.236 --rc genhtml_branch_coverage=1 00:20:23.236 --rc genhtml_function_coverage=1 00:20:23.236 --rc genhtml_legend=1 00:20:23.236 --rc geninfo_all_blocks=1 00:20:23.236 --rc geninfo_unexecuted_blocks=1 00:20:23.236 00:20:23.236 ' 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:23.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.236 --rc genhtml_branch_coverage=1 00:20:23.236 --rc genhtml_function_coverage=1 00:20:23.236 --rc genhtml_legend=1 00:20:23.236 --rc geninfo_all_blocks=1 00:20:23.236 --rc geninfo_unexecuted_blocks=1 00:20:23.236 00:20:23.236 ' 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:23.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.236 --rc genhtml_branch_coverage=1 00:20:23.236 --rc genhtml_function_coverage=1 00:20:23.236 --rc genhtml_legend=1 00:20:23.236 --rc geninfo_all_blocks=1 00:20:23.236 --rc geninfo_unexecuted_blocks=1 00:20:23.236 00:20:23.236 ' 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:23.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.236 --rc genhtml_branch_coverage=1 00:20:23.236 --rc genhtml_function_coverage=1 00:20:23.236 --rc genhtml_legend=1 00:20:23.236 --rc geninfo_all_blocks=1 00:20:23.236 --rc geninfo_unexecuted_blocks=1 00:20:23.236 00:20:23.236 ' 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.236 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@50 -- # : 0 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:23.237 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:23.237 13:25:24 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # prepare_net_devs 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # local -g is_hw=no 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # remove_target_ns 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@276 -- # nvmf_veth_init 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@233 -- # create_target_ns 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@234 -- # create_main_bridge 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@114 -- # delete_main_bridge 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@130 -- # return 0 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # 
local dev=nvmf_br in_ns= 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@28 -- # local -g _dev 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # ips=() 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@160 -- # set_up initiator0 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set initiator0 up 
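Reference sketch (not part of the captured output): condensed into plain iproute2/iptables calls, the NET_TYPE=virt topology that the traced nvmf/setup.sh functions build around this point looks roughly like the following, using the same device, namespace, and address names that appear in this trace; the second pair (initiator1/target1 with 10.0.0.3-4) repeats the same pattern further down.
    ip netns add nvmf_ns_spdk                                    # target-side network namespace
    ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge joining both veth pairs
    ip link add initiator0 type veth peer name initiator0_br     # initiator-side veth pair
    ip link add target0 type veth peer name target0_br           # target-side veth pair
    ip link set target0 netns nvmf_ns_spdk                       # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev initiator0 && ip link set initiator0 up
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    ip netns exec nvmf_ns_spdk ip link set target0 up
    ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
    ip link set target0_br master nvmf_br && ip link set target0_br up
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the 4420 listener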
00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@160 -- # set_up target0 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set target0 up 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@161 -- # set_up target0_br 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # add_to_ns target0 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772161 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:20:23.237 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee 
/sys/class/net/initiator0/ifalias' 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:20:23.238 10.0.0.1 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772162 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:20:23.238 10.0.0.2 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@75 -- # set_up initiator0 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:20:23.238 13:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@136 -- # ip link set 
initiator0_br master nvmf_br 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@138 -- # set_up target0_br 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # ips=() 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:20:23.238 13:25:25 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@160 -- # set_up initiator1 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@160 -- # set_up target1 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set target1 up 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@161 -- # set_up target1_br 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:20:23.238 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # add_to_ns target1 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:20:23.498 
13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772163 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:20:23.498 10.0.0.3 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772164 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:20:23.498 10.0.0.4 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@75 -- # set_up initiator1 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 
-- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@138 -- # set_up target1_br 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@38 -- # ping_ips 2 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@96 -- # local pairs=2 pair 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo initiator0 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=initiator0 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:23.498 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:23.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:23.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:20:23.499 00:20:23.499 --- 10.0.0.1 ping statistics --- 00:20:23.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.499 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target0 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo target0 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=target0 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:20:23.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:23.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:20:23.499 00:20:23.499 --- 10.0.0.2 ping statistics --- 00:20:23.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.499 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair++ )) 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=initiator1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo initiator1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=initiator1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:20:23.499 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:23.499 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:20:23.499 00:20:23.499 --- 10.0.0.3 ping statistics --- 00:20:23.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.499 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo target1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=target1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:20:23.499 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:23.499 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.136 ms 00:20:23.499 00:20:23.499 --- 10.0.0.4 ping statistics --- 00:20:23.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.499 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair++ )) 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # return 0 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo initiator0 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=initiator0 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:20:23.499 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:20:23.500 13:25:25 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=initiator1 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo initiator1 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=initiator1 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target0 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo target0 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=target0 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target1 
in_ns=NVMF_TARGET_NS_CMD ip 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target1 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target1 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo target1 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=target1 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:20:23.500 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:20:23.757 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:23.757 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:20:23.757 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:23.757 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:23.757 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@324 -- # nvmfpid=75080 00:20:23.757 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # waitforlisten 75080 00:20:23.757 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75080 ']' 00:20:23.757 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:23.757 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.757 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:23.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
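The set_ip/ping sequence above gives each side of the two veth pairs an address from the integer pool (10.0.0.1/10.0.0.3 on the host-side initiators, 10.0.0.2/10.0.0.4 on the namespaced targets) and mirrors it into the interface's ifalias, which is what get_ip_address reads back when the legacy NVMF_*_IP variables are filled in just above. A minimal bash sketch of that step, reconstructed from the commands in the trace; the bit-shift decomposition is an assumption, since the xtrace only shows printf with the octets already expanded:

# sketch of the val_to_ip/set_ip steps traced above, not the verbatim nvmf/setup.sh source
val_to_ip() {
    local val=$1
    # assumed decomposition; the trace only shows the already-expanded arguments
    printf '%u.%u.%u.%u\n' $((val >> 24)) $(((val >> 16) & 255)) $(((val >> 8) & 255)) $((val & 255))
}

ip=$(val_to_ip 167772163)                                # -> 10.0.0.3
ip addr add "$ip/24" dev initiator1
echo "$ip" | tee /sys/class/net/initiator1/ifalias       # read back later by get_ip_address

ip=$(val_to_ip 167772164)                                # -> 10.0.0.4
ip netns exec nvmf_ns_spdk ip addr add "$ip/24" dev target1
echo "$ip" | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias

The pings earlier in the trace (nvmf_ns_spdk -> 10.0.0.1 and 10.0.0.3, host -> 10.0.0.2 and 10.0.0.4) confirm both pairs are reachable across nvmf_br before nvmf_tgt is started inside the namespace.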
00:20:23.758 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.758 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:23.758 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:23.758 [2024-09-27 13:25:25.430439] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:20:23.758 [2024-09-27 13:25:25.430552] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.758 [2024-09-27 13:25:25.570855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:24.015 [2024-09-27 13:25:25.643261] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.015 [2024-09-27 13:25:25.643330] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.015 [2024-09-27 13:25:25.643344] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.015 [2024-09-27 13:25:25.643354] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.015 [2024-09-27 13:25:25.643362] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.016 [2024-09-27 13:25:25.643554] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.016 [2024-09-27 13:25:25.643756] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.016 [2024-09-27 13:25:25.643891] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.016 [2024-09-27 13:25:25.677075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:24.016 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:24.016 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:20:24.016 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:20:24.016 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:24.016 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:24.016 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.016 13:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:24.273 [2024-09-27 13:25:26.075459] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.273 13:25:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:24.839 Malloc0 00:20:24.839 13:25:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:24.839 13:25:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Malloc0 00:20:25.098 13:25:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:25.357 [2024-09-27 13:25:27.174592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.357 13:25:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:25.923 [2024-09-27 13:25:27.466859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:25.923 13:25:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:25.923 [2024-09-27 13:25:27.739167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:25.923 13:25:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75130 00:20:25.923 13:25:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:25.923 13:25:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:25.923 13:25:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75130 /var/tmp/bdevperf.sock 00:20:25.923 13:25:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75130 ']' 00:20:25.923 13:25:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.923 13:25:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:25.923 13:25:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
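Condensed, the RPC sequence traced above (failover.sh@22 through @31) builds one subsystem backed by a malloc bdev, exposes it on three TCP ports of the first target address, and starts an idle bdevperf that is driven later over its own RPC socket. A sketch under those assumptions, using the same repo-local paths as this job and collapsing the three add_listener calls into a loop for brevity:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0                # 64 MB bdev, 512-byte blocks
$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns $NQN Malloc0
for port in 4420 4421 4422; do                           # the trace issues these one call at a time
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s $port
done

# bdevperf waits idle (-z) on its own socket until perform_tests is issued
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &

The two bdev_nvme_attach_controller calls that follow register both the 4420 and 4421 listeners as paths of the same NVMe0 controller inside bdevperf, which is what turns the later listener removals into failovers rather than outages.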
00:20:25.923 13:25:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:25.923 13:25:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:26.490 13:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:26.490 13:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:20:26.490 13:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:26.748 NVMe0n1 00:20:26.748 13:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:27.006 00:20:27.006 13:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:27.006 13:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75146 00:20:27.006 13:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:20:28.381 13:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:28.381 13:25:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:20:31.665 13:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:31.665 00:20:31.665 13:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:32.230 13:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:20:35.516 13:25:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:35.516 [2024-09-27 13:25:37.105485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.516 13:25:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:20:36.451 13:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:36.709 13:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75146 00:20:43.296 { 00:20:43.296 "results": [ 00:20:43.296 { 00:20:43.296 "job": "NVMe0n1", 00:20:43.296 "core_mask": "0x1", 00:20:43.296 "workload": "verify", 00:20:43.296 "status": "finished", 00:20:43.296 "verify_range": { 00:20:43.296 "start": 0, 00:20:43.296 "length": 16384 00:20:43.296 }, 00:20:43.296 "queue_depth": 128, 00:20:43.296 "io_size": 4096, 00:20:43.296 "runtime": 15.011907, 00:20:43.296 "iops": 8568.53163292312, 00:20:43.296 "mibps": 33.470826691105934, 00:20:43.296 "io_failed": 3213, 00:20:43.296 "io_timeout": 0, 00:20:43.296 "avg_latency_us": 14539.291275284033, 00:20:43.296 
"min_latency_us": 629.2945454545454, 00:20:43.296 "max_latency_us": 15728.64 00:20:43.296 } 00:20:43.296 ], 00:20:43.296 "core_count": 1 00:20:43.296 } 00:20:43.296 13:25:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75130 00:20:43.296 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75130 ']' 00:20:43.296 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75130 00:20:43.296 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:20:43.296 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:43.297 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75130 00:20:43.297 killing process with pid 75130 00:20:43.297 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:43.297 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:43.297 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75130' 00:20:43.297 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75130 00:20:43.297 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75130 00:20:43.297 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:43.297 [2024-09-27 13:25:27.814797] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:20:43.297 [2024-09-27 13:25:27.814916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75130 ] 00:20:43.297 [2024-09-27 13:25:27.952761] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.297 [2024-09-27 13:25:28.021155] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.297 [2024-09-27 13:25:28.054259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:43.297 Running I/O for 15 seconds... 
00:20:43.297 6676.00 IOPS, 26.08 MiB/s [2024-09-27 13:25:30.090036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.297 [2024-09-27 13:25:30.090123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 
13:25:30.090450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.090976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.090991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.091007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.091035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.091054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.091068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.091085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.091100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.091116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.091131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.091149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:118 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.091174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.091191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.091205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.091221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.091235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.297 [2024-09-27 13:25:30.091252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.297 [2024-09-27 13:25:30.091267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59992 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 
13:25:30.091816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.091980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.091994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.092010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.092024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.092040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.092054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.092070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.092084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.298 [2024-09-27 13:25:30.092100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.298 [2024-09-27 13:25:30.092114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:43.298 [2024-09-27 13:25:30.092137 - 13:25:30.094204] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: every remaining queued command on qid:1 is printed and completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; in this span that is WRITE sqid:1 lba:60160-60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 lba:59560-59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, one command/completion pair per outstanding cid 
00:20:43.300 [2024-09-27 13:25:30.094219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ac8e0 is same with the state(6) to be set 
00:20:43.300 [2024-09-27 13:25:30.094239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:20:43.300 [2024-09-27 13:25:30.094250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:20:43.300 [2024-09-27 13:25:30.094262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60568 len:8 PRP1 0x0 PRP2 0x0 
00:20:43.300 [2024-09-27 13:25:30.094276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:43.300 [2024-09-27 13:25:30.094333] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8ac8e0 was disconnected and freed. reset controller. 
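The status printed in the completions above, ABORTED - SQ DELETION (00/08), is the NVMe status pair (status code type / status code): SCT 0x0 is Generic Command Status and SC 0x08 is Command Aborted due to SQ Deletion, which is the expected completion for I/O still outstanding on a submission queue that the failover test tears down. Below is a minimal tally helper for triaging a console log like this one; it is not part of the SPDK test scripts, and the regex simply follows the spdk_nvme_print_completion format shown above.

import re
import sys
from collections import Counter

# Completion lines look like:
#   "... 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0"
COMPLETION_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<text>.+?) "
    r"\((?P<sct>[0-9a-fA-F]{2})/(?P<sc>[0-9a-fA-F]{2})\) qid:(?P<qid>\d+)"
)

def tally(log_lines):
    """Count completions per (status text, SCT, SC, qid), even on merged console lines."""
    counts = Counter()
    for line in log_lines:
        for m in COMPLETION_RE.finditer(line):
            counts[(m["text"], m["sct"], m["sc"], m["qid"])] += 1
    return counts

if __name__ == "__main__":
    for (text, sct, sc, qid), n in tally(sys.stdin).most_common():
        print(f"{n:6d}  qid:{qid}  SCT 0x{sct} / SC 0x{sc}  {text}")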
00:20:43.300 [2024-09-27 13:25:30.094352] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:20:43.300 [2024-09-27 13:25:30.094433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:43.300 [2024-09-27 13:25:30.094456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:43.300 [2024-09-27 13:25:30.094472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:43.300 [2024-09-27 13:25:30.094486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:43.300 [2024-09-27 13:25:30.094501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:43.300 [2024-09-27 13:25:30.094514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:43.300 [2024-09-27 13:25:30.094529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:43.300 [2024-09-27 13:25:30.094543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:43.300 [2024-09-27 13:25:30.094557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:43.300 [2024-09-27 13:25:30.094612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83e480 (9): Bad file descriptor 
00:20:43.300 [2024-09-27 13:25:30.098647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:20:43.300 [2024-09-27 13:25:30.136903] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
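This is the controller-level sequence the test exercises: the disconnected qpair triggers bdev_nvme to fail over from the first listener (10.0.0.2:4420) to the second (10.0.0.2:4421), the outstanding admin ASYNC EVENT REQUESTs are aborted along with the I/O, the controller is briefly marked failed, and the reset against the new path completes successfully. A companion sketch, again not part of the test suite, that pulls just this controller timeline out of a captured console log; the patterns follow the bdev_nvme.c and nvme_ctrlr.c messages shown above.

import re
import sys

# Controller-level events as printed by bdev_nvme.c / nvme_ctrlr.c in this log.
EVENT_PATTERNS = [
    ("qpair_freed",  re.compile(r"qpair (0x[0-9a-fA-F]+) was disconnected and freed")),
    ("failover",     re.compile(r"Start failover from (\S+) to (\S+)")),
    ("ctrlr_failed", re.compile(r"\[(nqn\.[^\]]+)\] in failed state")),
    ("resetting",    re.compile(r"\[(nqn\.[^\]]+)\] resetting controller")),
    ("reset_result", re.compile(r"Resetting controller (successful|failed)")),
]

TS = re.compile(r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\]")

def nearest_ts(line, pos):
    """Timestamp of the closest log entry at or before position pos on a (possibly merged) line."""
    best = "?"
    for m in TS.finditer(line):
        if m.start() > pos:
            break
        best = m.group(1)
    return best

def timeline(log_lines):
    """Yield (timestamp, event name, captured details) for each controller event."""
    for line in log_lines:
        for name, pat in EVENT_PATTERNS:
            for m in pat.finditer(line):
                yield nearest_ts(line, m.start()), name, m.groups()

if __name__ == "__main__":
    # Sorting on the ISO-style timestamp string keeps the events in order.
    for ts, name, details in sorted(timeline(sys.stdin)):
        print(ts, name, *details)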
00:20:43.300 7449.50 IOPS, 29.10 MiB/s 7953.00 IOPS, 31.07 MiB/s 8198.75 IOPS, 32.03 MiB/s 
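These bdevperf readings are consistent with the 4 KiB I/O size used throughout the run (len:8 blocks, assuming the usual 512-byte block size): MiB/s = IOPS * 4096 / 2^20, so 7449.50 IOPS comes out at roughly 29.10 MiB/s. A quick check of all three samples:

# Sanity-check the bdevperf readings above. Assumption: 512-byte blocks, so
# len:8 means 4096 bytes per I/O and MiB/s = IOPS * 4096 / 2**20.
for iops, reported in [(7449.50, 29.10), (7953.00, 31.07), (8198.75, 32.03)]:
    mibps = iops * 8 * 512 / 2**20
    print(f"{iops:8.2f} IOPS -> {mibps:5.2f} MiB/s (log reports {reported:5.2f})")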
00:20:43.300 [2024-09-27 13:25:33.788030 - 13:25:33.791887] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: a second batch of queued commands on qid:1 is printed and completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; in this span that is WRITE sqid:1 lba:67784-68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 lba:67208-67720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, one command/completion pair per outstanding cid 
00:20:43.303 [2024-09-27 13:25:33.791902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:67728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.303 [2024-09-27 13:25:33.791916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.303 [2024-09-27 13:25:33.791932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.303 [2024-09-27 13:25:33.791946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.303 [2024-09-27 13:25:33.791961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.303 [2024-09-27 13:25:33.791975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.303 [2024-09-27 13:25:33.791991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.303 [2024-09-27 13:25:33.792005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.303 [2024-09-27 13:25:33.792021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.303 [2024-09-27 13:25:33.792034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.303 [2024-09-27 13:25:33.792050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.303 [2024-09-27 13:25:33.792065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.303 [2024-09-27 13:25:33.792090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0ae0 is same with the state(6) to be set 00:20:43.303 [2024-09-27 13:25:33.792109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.303 [2024-09-27 13:25:33.792120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.303 [2024-09-27 13:25:33.792131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67776 len:8 PRP1 0x0 PRP2 0x0 00:20:43.303 [2024-09-27 13:25:33.792144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.303 [2024-09-27 13:25:33.792191] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8b0ae0 was disconnected and freed. reset controller. 
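
The "(00/08)" pair that spdk_nvme_print_completion prints in every completion above is the NVMe status code type and status code: type 0x0 is the generic command status set, and generic status 0x08 is "Command Aborted due to SQ Deletion", which is why every command still queued on the I/O submission queue completes this way while the qpair is torn down. A minimal decoding sketch (illustrative only, not part of the SPDK test suite; only the values seen in this log are mapped):

    # Decode the "(SCT/SC)" token printed by spdk_nvme_print_completion,
    # e.g. "(00/08)" in the completions above.
    GENERIC_STATUS = {
        0x00: "SUCCESS",
        0x08: "ABORTED - SQ DELETION",   # generic status 0x08 per the NVMe base spec
    }

    def decode_status(token: str) -> str:
        """Turn a token such as '(00/08)' into a readable status string."""
        sct, sc = (int(x, 16) for x in token.strip("()").split("/"))
        if sct == 0x0:
            return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
        return f"SCT 0x{sct:x} / SC 0x{sc:02x}"

    print(decode_status("(00/08)"))   # -> ABORTED - SQ DELETION
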
00:20:43.303 [2024-09-27 13:25:33.792210] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:20:43.303 [2024-09-27 13:25:33.792264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.303 [2024-09-27 13:25:33.792286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.303 [2024-09-27 13:25:33.792302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.304 [2024-09-27 13:25:33.792316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:33.792330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.304 [2024-09-27 13:25:33.792344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:33.792359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.304 [2024-09-27 13:25:33.792372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:33.792386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:43.304 [2024-09-27 13:25:33.796338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:43.304 [2024-09-27 13:25:33.796378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83e480 (9): Bad file descriptor 00:20:43.304 [2024-09-27 13:25:33.829675] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
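
The block above shows one complete failover round: the I/O qpair is freed with its queued commands aborted, bdev_nvme announces "Start failover from 10.0.0.2:4421 to 10.0.0.2:4422", the old tqpair flush fails with a bad file descriptor, and the controller reset completes. A rough post-processing sketch for summarizing such rounds from a saved console log; the file name 'console.log' is hypothetical, and the marker strings are taken verbatim from the messages shown here:

    import re

    failovers = []            # one entry per "Start failover" notice
    aborted_since_reset = 0   # I/O-queue completions printed with ABORTED - SQ DELETION

    with open("console.log") as f:      # hypothetical capture of this console output
        for line in f:
            # qid:1 filters the I/O queue; admin-queue aborts are printed with qid:0
            aborted_since_reset += line.count("ABORTED - SQ DELETION (00/08) qid:1")
            m = re.search(r"Start failover from (\S+) to (\S+)", line)
            if m:
                failovers.append({"from": m.group(1), "to": m.group(2),
                                  "aborted_ios": aborted_since_reset,
                                  "reset_ok": False})
                aborted_since_reset = 0
            if "Resetting controller successful" in line and failovers:
                failovers[-1]["reset_ok"] = True

    for ev in failovers:
        print(f"{ev['from']} -> {ev['to']}: "
              f"{ev['aborted_ios']} aborted I/Os, reset_ok={ev['reset_ok']}")
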
00:20:43.304 8279.40 IOPS, 32.34 MiB/s 8406.83 IOPS, 32.84 MiB/s 8396.43 IOPS, 32.80 MiB/s 8430.88 IOPS, 32.93 MiB/s 8472.78 IOPS, 33.10 MiB/s [2024-09-27 13:25:38.386267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.386381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.386414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.386431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.386449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.386463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.386479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.386522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.386540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.386555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.386571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.386584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.386600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.386614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.386630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.386644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.386660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.386674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.386717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.386746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.386762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.386775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.386790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.386804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.386819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.386833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.386848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.386862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.386894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.386908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.386923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.386938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.386954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.304 [2024-09-27 13:25:38.386979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.386998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.304 [2024-09-27 13:25:38.387013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.387057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.304 [2024-09-27 13:25:38.387073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.387089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.304 [2024-09-27 13:25:38.387103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:43.304 [2024-09-27 13:25:38.387119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.304 [2024-09-27 13:25:38.387133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.387149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.304 [2024-09-27 13:25:38.387164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.387180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.304 [2024-09-27 13:25:38.387194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.387210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.304 [2024-09-27 13:25:38.387224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.387240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.387265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.387281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.387295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.387311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.387325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.387341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.387357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.387384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.387402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.387427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.387442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.387458] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.387472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.387489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.387503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.387519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.387534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.387551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.387565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.387581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.304 [2024-09-27 13:25:38.387595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.304 [2024-09-27 13:25:38.387626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.387640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.387655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.387669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.387684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.387711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.387745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.387759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.387775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.387789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.387805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:68 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.305 [2024-09-27 13:25:38.387819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.387835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.305 [2024-09-27 13:25:38.387856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.387891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.305 [2024-09-27 13:25:38.387906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.387923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.305 [2024-09-27 13:25:38.387936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.387953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.305 [2024-09-27 13:25:38.387967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.387983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.305 [2024-09-27 13:25:38.387996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.305 [2024-09-27 13:25:38.388027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.305 [2024-09-27 13:25:38.388057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.305 [2024-09-27 13:25:38.388087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.305 [2024-09-27 13:25:38.388119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:43.305 [2024-09-27 13:25:38.388148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.305 [2024-09-27 13:25:38.388178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.305 [2024-09-27 13:25:38.388207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.305 [2024-09-27 13:25:38.388252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.305 [2024-09-27 13:25:38.388305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.305 [2024-09-27 13:25:38.388336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.388376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.388405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.388435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.388465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.388494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.388524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.388554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.388583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.388628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.388673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.388717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.388752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.388794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.388824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.305 [2024-09-27 13:25:38.388839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.305 [2024-09-27 13:25:38.388853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.388869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.306 [2024-09-27 13:25:38.388898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.388914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.388928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.388944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.388957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.388973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.388987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.389017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.389048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.389078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.389123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.389152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.389188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 
[2024-09-27 13:25:38.389204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.389218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.389261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.389322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.389351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.389382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.389423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.389453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.306 [2024-09-27 13:25:38.389483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.306 [2024-09-27 13:25:38.389512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.306 [2024-09-27 13:25:38.389542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.306 [2024-09-27 13:25:38.389571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.306 [2024-09-27 13:25:38.389607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.306 [2024-09-27 13:25:38.389638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.306 [2024-09-27 13:25:38.389669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.306 [2024-09-27 13:25:38.389698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.306 [2024-09-27 13:25:38.389727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.306 [2024-09-27 13:25:38.389770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.306 [2024-09-27 13:25:38.389804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.306 [2024-09-27 13:25:38.389835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.306 [2024-09-27 13:25:38.389866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.306 [2024-09-27 13:25:38.389896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.306 [2024-09-27 13:25:38.389925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.306 [2024-09-27 13:25:38.389966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.389982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.389996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.390019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.390034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.390050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.390064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.390080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.390093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.390109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.390123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.390139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.390160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.306 [2024-09-27 13:25:38.390178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.306 [2024-09-27 13:25:38.390192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.307 [2024-09-27 13:25:38.390207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1500 is same with the state(6) to be set 00:20:43.307 
[... repeated nvme_qpair notices elided: for each queued command (READ sqid:1 cid:0 nsid:1 lba:288 len:8 and WRITE sqid:1 cid:0 nsid:1 lba:808 through lba:928 len:8) the qpair logged 'aborting queued i/o', 'Command completed manually', and an ABORTED - SQ DELETION (00/08) completion ...]
00:20:43.307 [2024-09-27 13:25:38.391157] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8b1500 was disconnected and freed. reset controller.
00:20:43.307 [2024-09-27 13:25:38.391176] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:20:43.307 [2024-09-27 13:25:38.391253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.307 [2024-09-27 13:25:38.391276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.307 [2024-09-27 13:25:38.391292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.307 [2024-09-27 13:25:38.391306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.307 [2024-09-27 13:25:38.391320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.307 [2024-09-27 13:25:38.391334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.307 [2024-09-27 13:25:38.391361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.307 [2024-09-27 13:25:38.391376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.307 [2024-09-27 13:25:38.391390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:43.307 [2024-09-27 13:25:38.395462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:43.307 [2024-09-27 13:25:38.395510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83e480 (9): Bad file descriptor 00:20:43.307 [2024-09-27 13:25:38.435486] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
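Each failover leg above ends with a 'Resetting controller successful' notice, and the step traced below at host/failover.sh@65 counts those notices and requires exactly three of them, one per forced path switch. A minimal sketch of that kind of check, assuming the bdevperf output has been captured to the try.txt file that the trace reads and removes later:

log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
# Count successful controller resets recorded by bdevperf; the failover test
# expects one per forced path switch (three in this run).
count=$(grep -c 'Resetting controller successful' "$log")
if (( count != 3 )); then
    echo "expected 3 successful controller resets, found $count" >&2
    exit 1
fi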
00:20:43.307 8459.00 IOPS, 33.04 MiB/s 8477.64 IOPS, 33.12 MiB/s 8521.17 IOPS, 33.29 MiB/s 8545.08 IOPS, 33.38 MiB/s 8557.00 IOPS, 33.43 MiB/s 8571.33 IOPS, 33.48 MiB/s
00:20:43.307 Latency(us)
00:20:43.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:43.307 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:43.307 Verification LBA range: start 0x0 length 0x4000
00:20:43.307 NVMe0n1 : 15.01 8568.53 33.47 214.03 0.00 14539.29 629.29 15728.64
00:20:43.307 ===================================================================================================================
00:20:43.307 Total : 8568.53 33.47 214.03 0.00 14539.29 629.29 15728.64
00:20:43.307 Received shutdown signal, test time was about 15.000000 seconds
00:20:43.307
00:20:43.307 Latency(us)
00:20:43.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:43.308 ===================================================================================================================
00:20:43.308 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:43.308 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:20:43.308 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:20:43.308 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:20:43.308 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75320
00:20:43.308 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:20:43.308 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75320 /var/tmp/bdevperf.sock
00:20:43.308 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75320 ']'
00:20:43.308 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:43.308 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:43.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:43.308 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
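The trace above relaunches bdevperf in wait-for-RPC mode (-z) on a private socket and then waits for that socket before driving it through rpc.py (the waitforlisten helper). A stand-alone sketch of that launch-and-wait pattern, assuming the SPDK tree at /home/vagrant/spdk_repo/spdk as in the trace; the polling loop and its roughly 10-second timeout are illustrative choices, not taken from the log:

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bdevperf.sock
# Start bdevperf paused (-z) so controllers can be attached over its RPC socket
# before any I/O is generated.
"$spdk"/build/examples/bdevperf -z -r "$sock" -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
# Poll for the RPC socket so the rpc.py calls that follow do not race startup.
for _ in $(seq 1 100); do
    [[ -S $sock ]] && break
    sleep 0.1
done
[[ -S $sock ]] || { echo "bdevperf (pid $bdevperf_pid) did not create $sock" >&2; exit 1; }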
00:20:43.308 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:43.308 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:43.308 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:43.308 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:20:43.308 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:43.308 [2024-09-27 13:25:44.845781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:43.308 13:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:43.567 [2024-09-27 13:25:45.170053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:43.567 13:25:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:43.827 NVMe0n1 00:20:43.827 13:25:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:44.085 00:20:44.085 13:25:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:44.344 00:20:44.603 13:25:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:44.603 13:25:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:20:44.862 13:25:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:45.120 13:25:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:20:48.401 13:25:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:48.401 13:25:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:20:48.401 13:25:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75395 00:20:48.401 13:25:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75395 00:20:48.401 13:25:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:49.336 { 00:20:49.336 "results": [ 00:20:49.336 { 00:20:49.336 "job": "NVMe0n1", 00:20:49.336 "core_mask": "0x1", 00:20:49.336 "workload": "verify", 00:20:49.336 "status": "finished", 00:20:49.336 "verify_range": { 00:20:49.336 "start": 0, 00:20:49.336 "length": 16384 00:20:49.336 }, 00:20:49.336 "queue_depth": 128, 00:20:49.336 "io_size": 4096, 
00:20:49.336 "runtime": 1.009196, 00:20:49.336 "iops": 7483.184634104772, 00:20:49.336 "mibps": 29.231189976971766, 00:20:49.336 "io_failed": 0, 00:20:49.336 "io_timeout": 0, 00:20:49.336 "avg_latency_us": 16993.050231124806, 00:20:49.336 "min_latency_us": 1385.1927272727273, 00:20:49.336 "max_latency_us": 15966.952727272726 00:20:49.336 } 00:20:49.336 ], 00:20:49.336 "core_count": 1 00:20:49.336 } 00:20:49.336 13:25:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:49.336 [2024-09-27 13:25:44.273990] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:20:49.336 [2024-09-27 13:25:44.274123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75320 ] 00:20:49.336 [2024-09-27 13:25:44.407784] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.336 [2024-09-27 13:25:44.468756] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.336 [2024-09-27 13:25:44.499920] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:49.336 [2024-09-27 13:25:46.700579] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:20:49.336 [2024-09-27 13:25:46.700750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:49.336 [2024-09-27 13:25:46.700779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:49.336 [2024-09-27 13:25:46.700800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:49.336 [2024-09-27 13:25:46.700815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:49.336 [2024-09-27 13:25:46.700831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:49.337 [2024-09-27 13:25:46.700845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:49.337 [2024-09-27 13:25:46.700861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:49.337 [2024-09-27 13:25:46.700876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:49.337 [2024-09-27 13:25:46.700891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:49.337 [2024-09-27 13:25:46.700955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:49.337 [2024-09-27 13:25:46.700990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85e480 (9): Bad file descriptor 00:20:49.337 [2024-09-27 13:25:46.709033] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:49.337 Running I/O for 1 seconds... 
00:20:49.337 7424.00 IOPS, 29.00 MiB/s 00:20:49.337 Latency(us) 00:20:49.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.337 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:49.337 Verification LBA range: start 0x0 length 0x4000 00:20:49.337 NVMe0n1 : 1.01 7483.18 29.23 0.00 0.00 16993.05 1385.19 15966.95 00:20:49.337 =================================================================================================================== 00:20:49.337 Total : 7483.18 29.23 0.00 0.00 16993.05 1385.19 15966.95 00:20:49.337 13:25:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:49.337 13:25:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:20:49.904 13:25:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:50.161 13:25:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:50.161 13:25:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:20:50.419 13:25:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:50.677 13:25:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:20:53.961 13:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:53.961 13:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:20:53.961 13:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75320 00:20:53.961 13:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75320 ']' 00:20:53.961 13:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75320 00:20:53.961 13:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:20:53.961 13:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:53.961 13:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75320 00:20:53.961 13:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:53.961 13:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:53.961 13:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75320' 00:20:53.961 killing process with pid 75320 00:20:53.961 13:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75320 00:20:53.961 13:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75320 00:20:54.220 13:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:20:54.220 13:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:54.479 
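Throughout the trace above, each forced path removal is followed by a bdev_nvme_get_controllers check to confirm that NVMe0 survived the detach, i.e. that bdev_nvme failed over to a remaining listener instead of dropping the controller. A condensed sketch of one such detach-and-verify step, using the 10.0.0.2:4422 path detached at host/failover.sh@98 and the same RPC socket as the trace; the short settle delay is an illustrative choice:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
# Remove one path of the multipath controller by its transport ID.
"$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# Give bdev_nvme a moment to switch to a surviving path, then confirm the
# controller is still registered under its original name.
sleep 3
"$rpc" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0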
13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:20:54.479 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:54.479 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:20:54.479 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@331 -- # nvmfcleanup 00:20:54.479 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@99 -- # sync 00:20:54.479 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:54.479 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # set +e 00:20:54.479 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:54.479 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:54.479 rmmod nvme_tcp 00:20:54.479 rmmod nvme_fabrics 00:20:54.479 rmmod nvme_keyring 00:20:54.479 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:54.479 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # set -e 00:20:54.479 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # return 0 00:20:54.479 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # '[' -n 75080 ']' 00:20:54.479 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@333 -- # killprocess 75080 00:20:54.479 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75080 ']' 00:20:54.479 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75080 00:20:54.479 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:20:54.479 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:54.479 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75080 00:20:54.738 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:54.738 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:54.738 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75080' 00:20:54.738 killing process with pid 75080 00:20:54.738 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75080 00:20:54.738 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75080 00:20:54.738 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:20:54.738 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # nvmf_fini 00:20:54.738 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@264 -- # local dev 00:20:54.738 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@267 -- # remove_target_ns 00:20:54.738 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:54.738 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:54.738 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:54.738 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@268 -- # delete_main_bridge 00:20:54.738 13:25:56 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:54.738 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:20:54.738 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:20:54.738 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:20:54.738 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:20:54.738 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@271 -- # continue 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@271 -- # continue 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # _dev=0 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # dev_map=() 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@284 -- # iptr 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@538 -- # iptables-save 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:20:54.997 13:25:56 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@538 -- # iptables-restore 00:20:54.997 00:20:54.997 real 0m32.062s 00:20:54.997 user 2m3.965s 00:20:54.997 sys 0m5.569s 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:54.997 ************************************ 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:54.997 END TEST nvmf_failover 00:20:54.997 ************************************ 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.997 ************************************ 00:20:54.997 START TEST nvmf_host_multipath_status 00:20:54.997 ************************************ 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:54.997 * Looking for test storage... 00:20:54.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:54.997 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:54.998 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:54.998 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:55.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.257 --rc genhtml_branch_coverage=1 00:20:55.257 --rc genhtml_function_coverage=1 00:20:55.257 --rc genhtml_legend=1 00:20:55.257 --rc geninfo_all_blocks=1 00:20:55.257 --rc geninfo_unexecuted_blocks=1 00:20:55.257 00:20:55.257 ' 00:20:55.257 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:55.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.257 --rc genhtml_branch_coverage=1 00:20:55.257 --rc genhtml_function_coverage=1 00:20:55.257 --rc genhtml_legend=1 00:20:55.257 --rc geninfo_all_blocks=1 00:20:55.258 --rc geninfo_unexecuted_blocks=1 00:20:55.258 00:20:55.258 ' 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:55.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.258 --rc genhtml_branch_coverage=1 00:20:55.258 --rc genhtml_function_coverage=1 00:20:55.258 --rc genhtml_legend=1 00:20:55.258 --rc geninfo_all_blocks=1 00:20:55.258 --rc geninfo_unexecuted_blocks=1 00:20:55.258 00:20:55.258 ' 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:55.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.258 --rc genhtml_branch_coverage=1 00:20:55.258 --rc genhtml_function_coverage=1 00:20:55.258 --rc genhtml_legend=1 00:20:55.258 --rc geninfo_all_blocks=1 00:20:55.258 --rc geninfo_unexecuted_blocks=1 00:20:55.258 00:20:55.258 ' 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@50 -- # : 0 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:55.258 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:55.258 13:25:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # prepare_net_devs 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # local -g is_hw=no 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # remove_target_ns 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:20:55.258 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@276 -- # nvmf_veth_init 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@233 -- # create_target_ns 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:20:55.259 
13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@234 -- # create_main_bridge 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@114 -- # delete_main_bridge 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@130 -- # return 0 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@28 -- # local -g _dev 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # 
ips=() 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@160 -- # set_up initiator0 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:20:55.259 13:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@160 -- # set_up target0 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' 
]] 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set target0 up 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@161 -- # set_up target0_br 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # add_to_ns target0 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772161 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:20:55.259 10.0.0.1 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:55.259 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@11 -- # local val=167772162 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:20:55.260 10.0.0.2 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@75 -- # set_up initiator0 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:20:55.260 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 
00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@138 -- # set_up target0_br 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # ips=() 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 
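
Both bridge-side peers of a pair are enslaved to a shared nvmf_br bridge (created earlier in the run, so its creation is assumed here), and the ipts wrapper opens TCP/4420 on the initiator interface, tagging the rule with an SPDK_NVMF comment, presumably so cleanup can later find and remove exactly the rules it added. Condensed from the trace above:

  # Attach the bridge-side veth peers to the shared test bridge.
  ip link set initiator0_br master nvmf_br
  ip link set target0_br master nvmf_br
  # Allow NVMe/TCP traffic in on the initiator side; the comment marks the rule as SPDK-owned.
  iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'
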
00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@160 -- # set_up initiator1 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:20:55.520 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@160 -- # set_up target1 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set target1 up 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@161 -- # set_up target1_br 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # add_to_ns target1 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:20:55.521 13:25:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772163 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:20:55.521 10.0.0.3 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772164 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:20:55.521 10.0.0.4 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@75 -- # set_up initiator1 00:20:55.521 13:25:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@138 -- # set_up target1_br 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 
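
The addresses themselves come from a 32-bit pool: set_ip receives 167772161, 167772162, ... and val_to_ip prints them as dotted quads (167772161 is 0x0A000001, i.e. 10.0.0.1). A self-contained equivalent of that conversion, using only bash arithmetic (the in-tree helper may differ in detail, but the traced printf call matches this shape):

  # Convert a 32-bit integer from the ip_pool counter into dotted-quad notation.
  val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
      $(( (val >> 24) & 0xff )) \
      $(( (val >> 16) & 0xff )) \
      $(( (val >>  8) & 0xff )) \
      $((  val        & 0xff ))
  }
  val_to_ip 167772161   # 10.0.0.1 (initiator0)
  val_to_ip 167772164   # 10.0.0.4 (target1)
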
00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@38 -- # ping_ips 2 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo initiator0 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=initiator0 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:55.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:20:55.521 00:20:55.521 --- 10.0.0.1 ping statistics --- 00:20:55.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.521 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:20:55.521 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target0 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo target0 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=target0 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:20:55.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:55.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:20:55.522 00:20:55.522 --- 10.0.0.2 ping statistics --- 00:20:55.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.522 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair++ )) 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=initiator1 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo initiator1 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=initiator1 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:20:55.522 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:55.522 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:20:55.522 00:20:55.522 --- 10.0.0.3 ping statistics --- 00:20:55.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.522 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target1 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target1 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo target1 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=target1 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:20:55.522 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:55.522 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:20:55.522 00:20:55.522 --- 10.0.0.4 ping statistics --- 00:20:55.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.522 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair++ )) 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # return 0 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:55.522 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo initiator0 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=initiator0 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@183 -- # 
get_ip_address initiator1 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=initiator1 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:55.791 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo initiator1 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=initiator1 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target0 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo target0 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=target0 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:55.792 13:25:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target1 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target1 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo target1 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=target1 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:20:55.792 13:25:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@324 -- # nvmfpid=75716 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # waitforlisten 75716 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 75716 ']' 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:55.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:55.792 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:55.792 [2024-09-27 13:25:57.508765] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:20:55.792 [2024-09-27 13:25:57.508861] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.085 [2024-09-27 13:25:57.648238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:56.085 [2024-09-27 13:25:57.733653] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.085 [2024-09-27 13:25:57.733725] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.085 [2024-09-27 13:25:57.733741] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.085 [2024-09-27 13:25:57.733751] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.085 [2024-09-27 13:25:57.733759] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
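
nvmfappstart launches the target inside the namespace (the traced command is ip netns exec nvmf_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3, pid 75716) and waitforlisten blocks until the application is up and its RPC socket is available. A rough equivalent of that start-and-wait step, with a simple socket poll standing in for the real waitforlisten helper:

  # Start the NVMe-oF target on cores 0-1 inside the test namespace.
  ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # Poll until the RPC UNIX domain socket shows up (the real helper also checks the pid).
  until [[ -S /var/tmp/spdk.sock ]]; do
    sleep 0.1
  done
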
00:20:56.085 [2024-09-27 13:25:57.733917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.085 [2024-09-27 13:25:57.733929] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.085 [2024-09-27 13:25:57.765804] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:56.085 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:56.085 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:20:56.085 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:20:56.085 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:56.085 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:56.085 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.085 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=75716 00:20:56.085 13:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:56.344 [2024-09-27 13:25:58.163287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.344 13:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:56.603 Malloc0 00:20:56.862 13:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:57.121 13:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:57.380 13:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:57.639 [2024-09-27 13:25:59.231949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.639 13:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:57.898 [2024-09-27 13:25:59.584141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:57.898 13:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=75764 00:20:57.898 13:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:57.898 13:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.898 13:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 75764 /var/tmp/bdevperf.sock 00:20:57.898 
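
With the target up, multipath_status.sh builds the storage stack over /var/tmp/spdk.sock: a TCP transport, a 64 MiB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 (-r presumably enabling the ANA reporting that the later set_ANA_state calls rely on), and two listeners on 10.0.0.2, ports 4420 and 4421, so the host sees two paths to the same namespace. The RPC sequence above, condensed (rpc.py is scripts/rpc.py from the spdk repo):

  # Target-side stack for the multipath test, all over /var/tmp/spdk.sock.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
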
13:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 75764 ']' 00:20:57.898 13:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.898 13:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:57.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.898 13:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.898 13:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:57.898 13:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:58.157 13:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:58.157 13:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:20:58.157 13:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:58.725 13:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:20:58.984 Nvme0n1 00:20:58.984 13:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:59.242 Nvme0n1 00:20:59.242 13:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:20:59.242 13:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:01.836 13:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:01.836 13:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:01.836 13:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:02.094 13:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:03.029 13:26:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:03.029 13:26:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:03.029 13:26:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
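
On the host side, bdevperf (started with -m 0x4 -z -r /var/tmp/bdevperf.sock) gets its NVMe bdev through two attach calls: the first connects the 4420 listener and produces Nvme0n1, the second adds the 4421 listener as a second path on the same controller via -x multipath. Condensed from the trace, with all options copied verbatim:

  # Controller options, then attach both paths of the same subsystem on the bdevperf socket.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  # Second path to the same NQN; -x multipath adds it to the existing Nvme0 controller.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
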
00:21:03.029 13:26:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:03.287 13:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:03.287 13:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:03.287 13:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:03.287 13:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:03.855 13:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:03.855 13:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:03.855 13:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:03.855 13:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:03.855 13:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:03.855 13:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:03.855 13:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:03.855 13:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:04.114 13:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:04.114 13:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:04.114 13:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:04.114 13:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:04.373 13:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:04.373 13:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:04.373 13:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:04.373 13:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:04.632 13:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
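
check_status then asserts, per port, the current/connected/accessible fields that bdev_nvme_get_io_paths reports; the port_status helper at multipath_status.sh@64 is essentially an RPC call piped through a jq select on the trsvcid. A rough reconstruction of that pattern (the real helper may differ in how it handles the result):

  # Query one field of one path on the bdevperf socket and compare it to the expected value.
  port_status() {
    local port=$1 field=$2 expected=$3 got
    got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ $got == "$expected" ]]
  }
  port_status 4420 current true      # e.g. after set_ANA_state optimized optimized
  port_status 4421 accessible true
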
00:21:04.632 13:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:04.632 13:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:05.198 13:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:05.456 13:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:06.391 13:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:06.391 13:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:06.391 13:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:06.391 13:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:06.957 13:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:06.957 13:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:06.957 13:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:06.957 13:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:07.216 13:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:07.216 13:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:07.216 13:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.216 13:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:07.475 13:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:07.475 13:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:07.475 13:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:07.475 13:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.733 13:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:07.733 13:26:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:07.733 13:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.733 13:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:07.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:07.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:07.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:08.251 13:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:08.251 13:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:21:08.251 13:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:08.818 13:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:09.077 13:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:21:10.011 13:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:10.011 13:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:10.011 13:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:10.011 13:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.270 13:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:10.270 13:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:10.270 13:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.270 13:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:10.528 13:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:10.528 13:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:10.528 13:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.529 13:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:11.111 13:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:11.111 13:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:11.111 13:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:11.111 13:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:11.377 13:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:11.377 13:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:11.377 13:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:11.377 13:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:11.635 13:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:11.635 13:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:11.635 13:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:11.635 13:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:11.894 13:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:11.894 13:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:21:11.894 13:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:12.153 13:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:12.413 13:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:21:13.348 13:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:21:13.348 13:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # 
port_status 4420 current true 00:21:13.348 13:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.348 13:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:13.916 13:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:13.916 13:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:13.916 13:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:13.916 13:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.177 13:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:14.177 13:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:14.177 13:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:14.177 13:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.436 13:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.436 13:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:14.436 13:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.436 13:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:14.695 13:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.695 13:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:14.695 13:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.695 13:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:14.952 13:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.953 13:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:14.953 13:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.953 13:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:15.211 13:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:15.211 13:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:21:15.211 13:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:15.469 13:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:15.727 13:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:21:17.103 13:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:21:17.103 13:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:17.103 13:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.103 13:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:17.103 13:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:17.103 13:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:17.103 13:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.103 13:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:17.362 13:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:17.362 13:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:17.362 13:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.362 13:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:17.927 13:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:17.927 13:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:17.927 13:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.927 13:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:18.185 13:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:18.185 13:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:18.185 13:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:18.185 13:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:18.444 13:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:18.444 13:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:18.444 13:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:18.444 13:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:18.702 13:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:18.702 13:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:18.702 13:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:18.959 13:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:19.217 13:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:21:20.151 13:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:20.151 13:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:20.151 13:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:20.151 13:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:20.409 13:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:20.409 13:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:20.409 13:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:20.409 13:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:20.668 13:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:20.668 13:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:20.668 13:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:20.668 13:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:21.234 13:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:21.234 13:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:21.234 13:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:21.234 13:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:21.492 13:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:21.492 13:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:21.492 13:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:21.492 13:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:21.750 13:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:21.750 13:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:21.750 13:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:21.750 13:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:22.008 13:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:22.008 13:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:21:22.266 13:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:21:22.266 13:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:22.524 13:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:22.782 13:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:21:24.156 13:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:21:24.156 13:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:24.156 13:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:24.156 13:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:24.156 13:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:24.156 13:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:24.156 13:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:24.156 13:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:24.413 13:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:24.413 13:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:24.413 13:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:24.413 13:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:24.671 13:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:24.671 13:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:24.671 13:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:24.671 13:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:25.276 13:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:25.276 13:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:25.276 13:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.276 13:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:25.276 13:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:25.276 13:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:25.276 13:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:25.276 13:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.843 13:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:25.843 13:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:21:25.843 13:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:25.843 13:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:26.101 13:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:21:27.478 13:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:21:27.478 13:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:27.478 13:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.478 13:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:27.478 13:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:27.478 13:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:27.478 13:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.478 13:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:27.737 13:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:27.737 13:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:27.737 13:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:27.737 13:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.996 13:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
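(Reader's note on the repetitive trace above: every port_status check is the same two-step pattern — query the io_paths of the bdevperf-side controller over its RPC socket, then pick one field for the listener port with jq. The sketch below is reconstructed from what multipath_status.sh echoes in this log; the script source itself is not shown here, so treat the helper name and argument order as an illustration, not the canonical implementation.)

# Minimal sketch: check one field (current/connected/accessible) of the io_path
# whose listener uses the given trsvcid, and compare it to the expected value.
port_status() {
    local port=$1 field=$2 expected=$3
    local actual
    actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

# Example matching the checks in this trace: with both listeners ANA-optimized,
# both ports should report accessible == true.
port_status 4420 accessible true
port_status 4421 accessible true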
00:21:27.996 13:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:27.996 13:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.996 13:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:28.255 13:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:28.255 13:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:28.255 13:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:28.255 13:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:28.513 13:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:28.513 13:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:28.513 13:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:28.513 13:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.078 13:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:29.078 13:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:21:29.078 13:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:29.336 13:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:29.594 13:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:21:30.530 13:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:21:30.530 13:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:30.530 13:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.530 13:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:30.788 13:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.788 13:26:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:30.788 13:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:30.788 13:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.046 13:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.046 13:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:31.046 13:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.046 13:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:31.612 13:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.612 13:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:31.612 13:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.612 13:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:31.870 13:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.870 13:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:31.870 13:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.870 13:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:32.128 13:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:32.128 13:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:32.128 13:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.128 13:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:32.387 13:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:32.387 13:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:21:32.387 13:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:32.645 13:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:32.904 13:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:21:33.840 13:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:21:33.840 13:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:33.840 13:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.840 13:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:34.407 13:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:34.407 13:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:34.407 13:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:34.407 13:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:34.407 13:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:34.407 13:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:34.407 13:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:34.407 13:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:34.665 13:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:34.665 13:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:34.665 13:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:34.665 13:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:34.924 13:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:34.924 13:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:34.924 13:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:34.924 13:26:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:35.491 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:35.491 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:35.491 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:35.491 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:35.751 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:35.751 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 75764 00:21:35.751 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 75764 ']' 00:21:35.751 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 75764 00:21:35.751 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:21:35.751 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:35.751 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75764 00:21:35.751 killing process with pid 75764 00:21:35.751 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:35.751 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:35.751 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75764' 00:21:35.751 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 75764 00:21:35.751 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 75764 00:21:35.751 { 00:21:35.751 "results": [ 00:21:35.751 { 00:21:35.751 "job": "Nvme0n1", 00:21:35.751 "core_mask": "0x4", 00:21:35.751 "workload": "verify", 00:21:35.751 "status": "terminated", 00:21:35.751 "verify_range": { 00:21:35.751 "start": 0, 00:21:35.751 "length": 16384 00:21:35.751 }, 00:21:35.751 "queue_depth": 128, 00:21:35.751 "io_size": 4096, 00:21:35.751 "runtime": 36.216525, 00:21:35.751 "iops": 8144.928316562674, 00:21:35.751 "mibps": 31.816126236572945, 00:21:35.751 "io_failed": 0, 00:21:35.751 "io_timeout": 0, 00:21:35.751 "avg_latency_us": 15682.135356613106, 00:21:35.751 "min_latency_us": 338.8509090909091, 00:21:35.751 "max_latency_us": 4026531.84 00:21:35.751 } 00:21:35.751 ], 00:21:35.751 "core_count": 1 00:21:35.751 } 00:21:35.751 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 75764 00:21:35.751 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:35.751 [2024-09-27 13:25:59.662622] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
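(Side note on the bdevperf summary printed a few lines above: the terminated job emits a JSON blob with per-job iops, throughput and latency. If that blob is saved to a file — results.json is just an assumed name for illustration — the headline numbers can be pulled out with a one-liner in the same jq style the test already uses.)

# Print "<job>: <iops> IOPS, <mibps> MiB/s, avg latency <us> us" for each job in the summary.
jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us"' results.json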
00:21:35.751 [2024-09-27 13:25:59.662745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75764 ] 00:21:35.751 [2024-09-27 13:25:59.799524] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.751 [2024-09-27 13:25:59.857528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.751 [2024-09-27 13:25:59.886219] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:35.751 [2024-09-27 13:26:01.029513] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:21:35.751 Running I/O for 90 seconds... 00:21:35.751 6677.00 IOPS, 26.08 MiB/s 6730.50 IOPS, 26.29 MiB/s 6748.33 IOPS, 26.36 MiB/s 6757.25 IOPS, 26.40 MiB/s 6762.60 IOPS, 26.42 MiB/s 6775.67 IOPS, 26.47 MiB/s 6759.57 IOPS, 26.40 MiB/s 7021.50 IOPS, 27.43 MiB/s 7240.44 IOPS, 28.28 MiB/s 7403.20 IOPS, 28.92 MiB/s 7546.09 IOPS, 29.48 MiB/s 7661.25 IOPS, 29.93 MiB/s 7759.92 IOPS, 30.31 MiB/s 7846.79 IOPS, 30.65 MiB/s 7929.00 IOPS, 30.97 MiB/s 8001.44 IOPS, 31.26 MiB/s [2024-09-27 13:26:17.211990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.751 [2024-09-27 13:26:17.212043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:35.751 [2024-09-27 13:26:17.212117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.751 [2024-09-27 13:26:17.212138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:35.751 [2024-09-27 13:26:17.212161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.751 [2024-09-27 13:26:17.212176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:35.751 [2024-09-27 13:26:17.212198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.751 [2024-09-27 13:26:17.212212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:35.751 [2024-09-27 13:26:17.212233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.212248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.212269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.212284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.212305] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.212319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.212340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.212354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.212404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.752 [2024-09-27 13:26:17.212421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.212442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.752 [2024-09-27 13:26:17.212473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.212495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.752 [2024-09-27 13:26:17.212510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.212532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:106896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.752 [2024-09-27 13:26:17.212547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.212569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.752 [2024-09-27 13:26:17.212584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.212606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.752 [2024-09-27 13:26:17.212621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.212643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.752 [2024-09-27 13:26:17.212658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.212680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.752 [2024-09-27 13:26:17.212695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 
dnr:0 00:21:35.752 [2024-09-27 13:26:17.212731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.212751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.212773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.212788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.212810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.212827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.212850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.212865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.212898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.212916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.212937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.212953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.212975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.212990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.213027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.752 [2024-09-27 13:26:17.213064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.752 [2024-09-27 13:26:17.213102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.752 [2024-09-27 13:26:17.213139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.752 [2024-09-27 13:26:17.213191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.752 [2024-09-27 13:26:17.213227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.752 [2024-09-27 13:26:17.213262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:106984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.752 [2024-09-27 13:26:17.213298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.752 [2024-09-27 13:26:17.213335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.213403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.213442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.213479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.213516] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.213552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.213588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.213625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.752 [2024-09-27 13:26:17.213661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.752 [2024-09-27 13:26:17.213725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.752 [2024-09-27 13:26:17.213767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.752 [2024-09-27 13:26:17.213804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:35.752 [2024-09-27 13:26:17.213825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.213841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.213863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.213885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.213909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107040 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.213924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.213946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.213961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.213983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.213998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.214034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.214071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.214109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.214146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.214183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.214235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.214271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214292] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.214307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.214343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.214387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.214423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.214459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.214494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:107168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.214530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.214565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.214601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.753 [2024-09-27 13:26:17.214637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:35.753 
[2024-09-27 13:26:17.214659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.753 [2024-09-27 13:26:17.214674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.753 [2024-09-27 13:26:17.214740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.753 [2024-09-27 13:26:17.214778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.753 [2024-09-27 13:26:17.214815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.753 [2024-09-27 13:26:17.214863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.753 [2024-09-27 13:26:17.214900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.753 [2024-09-27 13:26:17.214937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.214975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.214997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.215012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.215047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.215063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.215086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.215101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.215123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:107224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.215137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.215159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.215174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.215196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.215210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.215237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.215252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.215274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.215289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.215320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.753 [2024-09-27 13:26:17.215337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:35.753 [2024-09-27 13:26:17.215359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.754 [2024-09-27 13:26:17.215375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:35.754 [2024-09-27 13:26:17.215412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:107280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.754 [2024-09-27 13:26:17.215426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:35.754 [2024-09-27 13:26:17.215447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.754 [2024-09-27 13:26:17.215462] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:35.754 [2024-09-27 13:26:17.215483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.754 [2024-09-27 13:26:17.215497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:35.754 [2024-09-27 13:26:17.215518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.754 [2024-09-27 13:26:17.215532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:35.754 [2024-09-27 13:26:17.215554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.754 [2024-09-27 13:26:17.215569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:35.754 [2024-09-27 13:26:17.215594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.754 [2024-09-27 13:26:17.215609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:35.754 [2024-09-27 13:26:17.215631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.754 [2024-09-27 13:26:17.215645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:35.754 [2024-09-27 13:26:17.215667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.754 [2024-09-27 13:26:17.215682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:35.754 [2024-09-27 13:26:17.215734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.754 [2024-09-27 13:26:17.215753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:35.754 [2024-09-27 13:26:17.215775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.754 [2024-09-27 13:26:17.215790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:36.017 [2024-09-27 13:26:17.215812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.017 [2024-09-27 13:26:17.215837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:36.017 [2024-09-27 13:26:17.215861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107752 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:36.017 [2024-09-27 13:26:17.215876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:36.017 [2024-09-27 13:26:17.215898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.017 [2024-09-27 13:26:17.215913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:36.017 [2024-09-27 13:26:17.215935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.017 [2024-09-27 13:26:17.215951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:36.017 [2024-09-27 13:26:17.215973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.017 [2024-09-27 13:26:17.215988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:36.017 [2024-09-27 13:26:17.216010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.017 [2024-09-27 13:26:17.216026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:36.017 [2024-09-27 13:26:17.216048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.017 [2024-09-27 13:26:17.216063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:36.017 [2024-09-27 13:26:17.216085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.017 [2024-09-27 13:26:17.216100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:36.017 [2024-09-27 13:26:17.216122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.017 [2024-09-27 13:26:17.216138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:36.017 [2024-09-27 13:26:17.216174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.017 [2024-09-27 13:26:17.216189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:36.017 [2024-09-27 13:26:17.216210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.017 [2024-09-27 13:26:17.216224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:36.017 [2024-09-27 13:26:17.216245] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.017 [2024-09-27 13:26:17.216260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:36.017 [2024-09-27 13:26:17.216281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.017 [2024-09-27 13:26:17.216302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:36.017 [2024-09-27 13:26:17.216324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.017 [2024-09-27 13:26:17.216338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:36.017 [2024-09-27 13:26:17.216360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.017 [2024-09-27 13:26:17.216375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:36.017 [2024-09-27 13:26:17.216396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.017 [2024-09-27 13:26:17.216410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:36.017 [2024-09-27 13:26:17.216432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:107424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.017 [2024-09-27 13:26:17.216446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:36.017 [2024-09-27 13:26:17.216468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.017 [2024-09-27 13:26:17.216482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:17.217265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.018 [2024-09-27 13:26:17.217294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:17.217330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:17.217347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:17.217377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:17.217393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 
13:26:17.217423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:17.217439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:17.217469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:17.217485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:17.217514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:17.217530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:17.217559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:17.217575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:17.217618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:17.217635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:17.217693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:17.217715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:17.217746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:17.217762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:17.217792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:17.217807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:17.217838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:17.217853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:17.217883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:17.217898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:17.217928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:17.217943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:17.217972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:17.217987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:17.218017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:17.218032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:17.218062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:17.218077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:36.018 7532.18 IOPS, 29.42 MiB/s 7113.72 IOPS, 27.79 MiB/s 6739.32 IOPS, 26.33 MiB/s 6402.35 IOPS, 25.01 MiB/s 6519.86 IOPS, 25.47 MiB/s 6635.64 IOPS, 25.92 MiB/s 6740.52 IOPS, 26.33 MiB/s 6967.75 IOPS, 27.22 MiB/s 7166.12 IOPS, 27.99 MiB/s 7337.46 IOPS, 28.66 MiB/s 7439.48 IOPS, 29.06 MiB/s 7497.79 IOPS, 29.29 MiB/s 7550.31 IOPS, 29.49 MiB/s 7599.17 IOPS, 29.68 MiB/s 7737.42 IOPS, 30.22 MiB/s 7878.69 IOPS, 30.78 MiB/s 8006.09 IOPS, 31.27 MiB/s [2024-09-27 13:26:34.648830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:34.648927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.648982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:34.649000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.649022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:34.649036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.649057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:34.649072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.649093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:36.018 [2024-09-27 13:26:34.649107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.649128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:58904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.018 [2024-09-27 13:26:34.649142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.649163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:58936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.018 [2024-09-27 13:26:34.649177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.649198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.018 [2024-09-27 13:26:34.649212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.649233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:34.649248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.649268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:34.649283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.649304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:34.649318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.649338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:34.649353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.649374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.018 [2024-09-27 13:26:34.649399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.649421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.018 [2024-09-27 13:26:34.649436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.649458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.018 [2024-09-27 13:26:34.649472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.649494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:58648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.018 [2024-09-27 13:26:34.649509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.649530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:58680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.018 [2024-09-27 13:26:34.649544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.649566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.018 [2024-09-27 13:26:34.649580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.649602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:34.649616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.649637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.018 [2024-09-27 13:26:34.649652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:36.018 [2024-09-27 13:26:34.649673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.649687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.649723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.649739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.649760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.649775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.649795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.019 [2024-09-27 13:26:34.649810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.649831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.019 [2024-09-27 13:26:34.649854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.649877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.649892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.649913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.649927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.649948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.649963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.649984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.649998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.019 [2024-09-27 13:26:34.650034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.019 [2024-09-27 13:26:34.650070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.019 [2024-09-27 13:26:34.650106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.019 [2024-09-27 13:26:34.650141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.650178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 
dnr:0 00:21:36.019 [2024-09-27 13:26:34.650199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.650213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.650249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.019 [2024-09-27 13:26:34.650301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.019 [2024-09-27 13:26:34.650346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.650383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.650419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.650456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.019 [2024-09-27 13:26:34.650492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.019 [2024-09-27 13:26:34.650528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.019 [2024-09-27 13:26:34.650565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.019 [2024-09-27 13:26:34.650601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.019 [2024-09-27 13:26:34.650638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.019 [2024-09-27 13:26:34.650688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.650738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.650775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.650819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.650855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.650891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.650926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.019 [2024-09-27 13:26:34.650962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.650983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.019 [2024-09-27 13:26:34.650997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.651018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.019 [2024-09-27 13:26:34.651060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.651083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.019 [2024-09-27 13:26:34.651098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.651120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.019 [2024-09-27 13:26:34.651135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.651156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.019 [2024-09-27 13:26:34.651171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:36.019 [2024-09-27 13:26:34.651199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.020 [2024-09-27 13:26:34.651214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.651236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.020 [2024-09-27 13:26:34.651251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.651292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.651321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.651346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.651362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.651384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:36.020 [2024-09-27 13:26:34.651400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.651422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.651437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.651459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.651474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.651496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.020 [2024-09-27 13:26:34.651510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.651532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.020 [2024-09-27 13:26:34.651547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.651568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.020 [2024-09-27 13:26:34.651583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.651605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.020 [2024-09-27 13:26:34.651633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.651654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.651669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.651690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.651704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.651743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.651759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.653212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.653258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.020 [2024-09-27 13:26:34.653296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.020 [2024-09-27 13:26:34.653331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.653367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.653404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.653439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.653475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.020 [2024-09-27 13:26:34.653511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.020 [2024-09-27 13:26:34.653546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653567] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.020 [2024-09-27 13:26:34.653581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.653617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.653673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.653737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.653773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.653808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.653844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:58904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.020 [2024-09-27 13:26:34.653879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.020 [2024-09-27 13:26:34.653915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.653936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.653950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:21:36.020 [2024-09-27 13:26:34.653971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.020 [2024-09-27 13:26:34.653986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.654007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.020 [2024-09-27 13:26:34.654023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.654044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.020 [2024-09-27 13:26:34.654058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.654079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.020 [2024-09-27 13:26:34.654093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:36.020 [2024-09-27 13:26:34.654114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.021 [2024-09-27 13:26:34.654129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.654951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.021 [2024-09-27 13:26:34.654979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.021 [2024-09-27 13:26:34.655032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.021 [2024-09-27 13:26:34.655089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.021 [2024-09-27 13:26:34.655126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.021 [2024-09-27 13:26:34.655163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.021 [2024-09-27 13:26:34.655200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.021 [2024-09-27 13:26:34.655236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.021 [2024-09-27 13:26:34.655272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.021 [2024-09-27 13:26:34.655309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.021 [2024-09-27 13:26:34.655345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:58872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.021 [2024-09-27 13:26:34.655382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.021 [2024-09-27 13:26:34.655418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.021 [2024-09-27 13:26:34.655468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.021 [2024-09-27 13:26:34.655508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.021 [2024-09-27 13:26:34.655545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.021 [2024-09-27 13:26:34.655585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.021 [2024-09-27 13:26:34.655636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.021 [2024-09-27 13:26:34.655671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.021 [2024-09-27 13:26:34.655706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.021 [2024-09-27 13:26:34.655757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.021 [2024-09-27 13:26:34.655793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.021 [2024-09-27 13:26:34.655828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.021 [2024-09-27 13:26:34.655864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.021 [2024-09-27 13:26:34.655899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:36.021 [2024-09-27 13:26:34.655942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.655964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.021 [2024-09-27 13:26:34.655979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.656000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.021 [2024-09-27 13:26:34.656014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.656035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.021 [2024-09-27 13:26:34.656049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.656071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.021 [2024-09-27 13:26:34.656092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.656114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.021 [2024-09-27 13:26:34.656128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.656149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.021 [2024-09-27 13:26:34.656164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.656185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.021 [2024-09-27 13:26:34.656200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.656236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.021 [2024-09-27 13:26:34.656255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.656277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.021 [2024-09-27 13:26:34.656291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:36.021 [2024-09-27 13:26:34.656313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 
nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.022 [2024-09-27 13:26:34.656327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.656348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.022 [2024-09-27 13:26:34.656362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.656383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.022 [2024-09-27 13:26:34.656397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.656443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.022 [2024-09-27 13:26:34.656459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.656480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.022 [2024-09-27 13:26:34.656495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.656516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.022 [2024-09-27 13:26:34.656530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.656551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.022 [2024-09-27 13:26:34.656565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.656586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.022 [2024-09-27 13:26:34.656600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.656621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.022 [2024-09-27 13:26:34.656635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.656656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.022 [2024-09-27 13:26:34.656670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.656707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.022 [2024-09-27 13:26:34.656724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.656745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.022 [2024-09-27 13:26:34.656760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.656781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.022 [2024-09-27 13:26:34.656795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.656817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.022 [2024-09-27 13:26:34.656831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.658336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.022 [2024-09-27 13:26:34.658366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.658407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.022 [2024-09-27 13:26:34.658425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.658446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.022 [2024-09-27 13:26:34.658461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.658482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.022 [2024-09-27 13:26:34.658496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.658517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.022 [2024-09-27 13:26:34.658532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.658553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.022 [2024-09-27 13:26:34.658567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 
00:21:36.022 [2024-09-27 13:26:34.658588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.022 [2024-09-27 13:26:34.658602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.658623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.022 [2024-09-27 13:26:34.658638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.658659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.022 [2024-09-27 13:26:34.658673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.658707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.022 [2024-09-27 13:26:34.658725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.658746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.022 [2024-09-27 13:26:34.658761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.658782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.022 [2024-09-27 13:26:34.658796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.658819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.022 [2024-09-27 13:26:34.658834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.658855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.022 [2024-09-27 13:26:34.658879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.658902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.022 [2024-09-27 13:26:34.658917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.658938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.022 [2024-09-27 13:26:34.658953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.658990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.022 [2024-09-27 13:26:34.659009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.659056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.022 [2024-09-27 13:26:34.659075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.659097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.022 [2024-09-27 13:26:34.659112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.659134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.022 [2024-09-27 13:26:34.659149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:36.022 [2024-09-27 13:26:34.659171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.659185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.659207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.023 [2024-09-27 13:26:34.659222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.659244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.659259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.659280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.659295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.659317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.659332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.659354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.023 [2024-09-27 13:26:34.659381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.659405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.023 [2024-09-27 13:26:34.659420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.659442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.023 [2024-09-27 13:26:34.659457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.659484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.023 [2024-09-27 13:26:34.659500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.659522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.659538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.660973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.661002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.661030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.023 [2024-09-27 13:26:34.661046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.661068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.023 [2024-09-27 13:26:34.661083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.661104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.023 [2024-09-27 13:26:34.661118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.661139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.023 [2024-09-27 13:26:34.661154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.661175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:36.023 [2024-09-27 13:26:34.661190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.661211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.661225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.661245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.661260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.661295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.661311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.661332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.023 [2024-09-27 13:26:34.661346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.661367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.023 [2024-09-27 13:26:34.661381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.661402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.023 [2024-09-27 13:26:34.661417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.661438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.661452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.661476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.661491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.661529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.661545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.663273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 
lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.023 [2024-09-27 13:26:34.663302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.663330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.023 [2024-09-27 13:26:34.663346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.663368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.663383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.663406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.023 [2024-09-27 13:26:34.663421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.663444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.663458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.663493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.663510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.663532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.663547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.663569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.023 [2024-09-27 13:26:34.663583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.663605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.663620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.663642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.023 [2024-09-27 13:26:34.663657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.663678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.023 [2024-09-27 13:26:34.663710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.663733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.663748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.663770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.663784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.663806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.023 [2024-09-27 13:26:34.663821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:36.023 [2024-09-27 13:26:34.663845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.024 [2024-09-27 13:26:34.663861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.663883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.663898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.663919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.663934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.663956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.663979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.664017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.664068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
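The notices keep alternating between queued READ/WRITE submissions and their path-related completions until the paths settle. If this flood ever needs to be summarized, for example to confirm that both I/O directions hit the inaccessible path, a quick pass over a saved copy of the console output is enough; this is a sketch only, and console.log is an assumed file name rather than something the job writes:

  # Hypothetical post-processing, not part of the test: tally READ vs WRITE
  # submissions logged while the active path was ANA-inaccessible.
  awk '/nvme_io_qpair_print_command/ {
           for (i = 1; i <= NF; i++)
               if ($i == "READ" || $i == "WRITE") count[$i]++
       }
       END { for (op in count) printf "%s %d\n", op, count[op] }' console.log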
00:21:36.024 [2024-09-27 13:26:34.664089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.024 [2024-09-27 13:26:34.664103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.024 [2024-09-27 13:26:34.664153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.024 [2024-09-27 13:26:34.664187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.024 [2024-09-27 13:26:34.664221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.664255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.664289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.664323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.664357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.024 [2024-09-27 13:26:34.664390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.024 [2024-09-27 13:26:34.664431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.024 [2024-09-27 13:26:34.664468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.024 [2024-09-27 13:26:34.664502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.664536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.664570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.024 [2024-09-27 13:26:34.664604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.664638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.664710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.024 [2024-09-27 13:26:34.664767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.024 [2024-09-27 13:26:34.664804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.664840] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.664876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.664898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.024 [2024-09-27 13:26:34.664913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.666039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.024 [2024-09-27 13:26:34.666097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.666123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.024 [2024-09-27 13:26:34.666138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.666159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.024 [2024-09-27 13:26:34.666173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.666194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.666208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.666229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.666243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.666263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.666277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.666298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.666311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.666331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:36.024 [2024-09-27 13:26:34.666346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.666366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.666380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.666400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.666414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.666434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.024 [2024-09-27 13:26:34.666447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:36.024 [2024-09-27 13:26:34.666468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.024 [2024-09-27 13:26:34.666482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.666885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.025 [2024-09-27 13:26:34.666913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.666940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.025 [2024-09-27 13:26:34.666957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.666979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.025 [2024-09-27 13:26:34.666993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.025 [2024-09-27 13:26:34.667042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.025 [2024-09-27 13:26:34.667081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 
lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.025 [2024-09-27 13:26:34.667117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.025 [2024-09-27 13:26:34.667154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.025 [2024-09-27 13:26:34.667191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.025 [2024-09-27 13:26:34.667227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.025 [2024-09-27 13:26:34.667264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.025 [2024-09-27 13:26:34.667301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.025 [2024-09-27 13:26:34.667337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.025 [2024-09-27 13:26:34.667386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.025 [2024-09-27 13:26:34.667438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.025 [2024-09-27 13:26:34.667474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667499] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.025 [2024-09-27 13:26:34.667514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.025 [2024-09-27 13:26:34.667549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.025 [2024-09-27 13:26:34.667584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.025 [2024-09-27 13:26:34.667634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.025 [2024-09-27 13:26:34.667667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.025 [2024-09-27 13:26:34.667719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.025 [2024-09-27 13:26:34.667772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.025 [2024-09-27 13:26:34.667809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.025 [2024-09-27 13:26:34.667846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.025 [2024-09-27 13:26:34.667891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
00:21:36.025 [2024-09-27 13:26:34.667914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.025 [2024-09-27 13:26:34.667929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.025 [2024-09-27 13:26:34.667965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.667987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.025 [2024-09-27 13:26:34.668002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.668023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.025 [2024-09-27 13:26:34.668053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.668088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.025 [2024-09-27 13:26:34.668102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.668138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.025 [2024-09-27 13:26:34.668156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:36.025 [2024-09-27 13:26:34.669669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.025 [2024-09-27 13:26:34.669714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:36.025 8082.26 IOPS, 31.57 MiB/s 8113.51 IOPS, 31.69 MiB/s 8140.14 IOPS, 31.80 MiB/s Received shutdown signal, test time was about 36.217380 seconds 00:21:36.025 00:21:36.025 Latency(us) 00:21:36.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.025 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:36.025 Verification LBA range: start 0x0 length 0x4000 00:21:36.025 Nvme0n1 : 36.22 8144.93 31.82 0.00 0.00 15682.14 338.85 4026531.84 00:21:36.025 =================================================================================================================== 00:21:36.025 Total : 8144.93 31.82 0.00 0.00 15682.14 338.85 4026531.84 00:21:36.025 [2024-09-27 13:26:37.426858] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:21:36.025 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:36.025 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:21:36.025 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:36.025 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:21:36.025 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@331 -- # nvmfcleanup 00:21:36.026 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@99 -- # sync 00:21:36.284 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:36.284 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # set +e 00:21:36.284 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:36.284 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:36.284 rmmod nvme_tcp 00:21:36.284 rmmod nvme_fabrics 00:21:36.284 rmmod nvme_keyring 00:21:36.284 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:36.284 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # set -e 00:21:36.284 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # return 0 00:21:36.284 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # '[' -n 75716 ']' 00:21:36.284 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@333 -- # killprocess 75716 00:21:36.284 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 75716 ']' 00:21:36.284 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 75716 00:21:36.284 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:21:36.284 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:36.284 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75716 00:21:36.284 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:36.284 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:36.284 killing process with pid 75716 00:21:36.284 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75716' 00:21:36.284 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 75716 00:21:36.284 13:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 75716 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # nvmf_fini 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@264 -- # local dev 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@267 -- # remove_target_ns 00:21:36.543 
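The wall of (03/02) completions above is the path reporting ASYMMETRIC ACCESS INACCESSIBLE for in-flight READ/WRITE commands while the multipath test holds that ANA state; the run still closes with the bdevperf summary shown (8144.93 IOPS, 31.82 MiB/s over roughly 36.22 s of 4 KiB verify I/O at queue depth 128). Those two throughput figures are consistent with each other, as the short sketch below checks; the numbers are copied from the summary above and the script itself is purely illustrative, not part of the test suite.

#!/usr/bin/env bash
# Sanity-check the bdevperf summary above: with 4096-byte I/Os,
# MiB/s should equal IOPS * 4096 / (1024 * 1024).
iops=8144.93
io_size=4096

awk -v iops="$iops" -v sz="$io_size" 'BEGIN {
    mibps = iops * sz / (1024 * 1024)
    printf "%.2f IOPS at %d B per I/O -> %.2f MiB/s\n", iops, sz, mibps
}'
# prints: 8144.93 IOPS at 4096 B per I/O -> 31.82 MiB/s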
13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@268 -- # delete_main_bridge 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@271 -- # continue 00:21:36.543 13:26:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@271 -- # continue 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # _dev=0 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # dev_map=() 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@284 -- # iptr 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@538 -- # iptables-save 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@538 -- # iptables-restore 00:21:36.543 00:21:36.543 real 0m41.606s 00:21:36.543 user 2m16.122s 00:21:36.543 sys 0m12.051s 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:36.543 13:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:36.543 ************************************ 00:21:36.543 END TEST nvmf_host_multipath_status 00:21:36.543 ************************************ 00:21:36.802 13:26:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:36.802 13:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.803 ************************************ 00:21:36.803 START TEST nvmf_discovery_remove_ifc 00:21:36.803 ************************************ 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:36.803 * Looking for test storage... 
00:21:36.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:36.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.803 --rc genhtml_branch_coverage=1 00:21:36.803 --rc genhtml_function_coverage=1 00:21:36.803 --rc genhtml_legend=1 00:21:36.803 --rc geninfo_all_blocks=1 00:21:36.803 --rc geninfo_unexecuted_blocks=1 00:21:36.803 00:21:36.803 ' 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:36.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.803 --rc genhtml_branch_coverage=1 00:21:36.803 --rc genhtml_function_coverage=1 00:21:36.803 --rc genhtml_legend=1 00:21:36.803 --rc geninfo_all_blocks=1 00:21:36.803 --rc geninfo_unexecuted_blocks=1 00:21:36.803 00:21:36.803 ' 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:36.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.803 --rc genhtml_branch_coverage=1 00:21:36.803 --rc genhtml_function_coverage=1 00:21:36.803 --rc genhtml_legend=1 00:21:36.803 --rc geninfo_all_blocks=1 00:21:36.803 --rc geninfo_unexecuted_blocks=1 00:21:36.803 00:21:36.803 ' 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:36.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.803 --rc genhtml_branch_coverage=1 00:21:36.803 --rc genhtml_function_coverage=1 00:21:36.803 --rc genhtml_legend=1 00:21:36.803 --rc geninfo_all_blocks=1 00:21:36.803 --rc geninfo_unexecuted_blocks=1 00:21:36.803 00:21:36.803 ' 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:36.803 13:26:38 
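The scripts/common.sh trace above is a component-wise numeric version compare: lt 1.15 2 splits both strings on . - :, compares field by field, and returns 0 here, which is why the branch/function coverage flags get exported next. A minimal standalone rendition of that idea follows; it is a sketch in the same spirit, not the scripts/common.sh implementation.

#!/usr/bin/env bash
# "Is version A strictly older than version B?" -- field-by-field numeric compare.
version_lt() {
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"   # matches the trace's return 0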
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@50 -- # : 0 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:36.803 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:36.804 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:36.804 13:26:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # discovery_port=8009 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@18 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@21 -- # host_sock=/tmp/host.sock 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # nvmftestinit 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # prepare_net_devs 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # local -g is_hw=no 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # remove_target_ns 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@276 -- # nvmf_veth_init 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@233 -- # create_target_ns 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.804 13:26:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@234 -- # create_main_bridge 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@114 -- # delete_main_bridge 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@130 -- # return 0 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:21:36.804 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@28 -- # local -g _dev 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:21:37.063 13:26:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@160 -- # set_up initiator0 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@160 -- # set_up target0 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@217 -- # ip link set target0 up 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@161 -- # set_up target0_br 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # add_to_ns target0 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772161 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:21:37.063 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:21:37.064 10.0.0.1 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772162 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@207 -- # ip=10.0.0.2 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:21:37.064 10.0.0.2 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@75 -- # set_up initiator0 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:21:37.064 13:26:38 
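The 167772161/167772162 values fed to set_ip above are one 32-bit IPv4 pool, starting at 0x0a000001 (10.0.0.1) and advanced by two addresses per initiator/target pair. A small sketch of the integer-to-dotted-quad mapping behind the printf '%u.%u.%u.%u' lines, written independently of nvmf/setup.sh's own val_to_ip:

#!/usr/bin/env bash
# Map a 32-bit value such as 167772161 (0x0a000001) to dotted-quad form.
val_to_dotted_quad() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) \
        $(( (val >> 16) & 255 )) \
        $(( (val >> 8)  & 255 )) \
        $((  val        & 255 ))
}

val_to_dotted_quad 167772161   # 10.0.0.1 -> initiator0
val_to_dotted_quad 167772162   # 10.0.0.2 -> target0 (inside nvmf_ns_spdk)
val_to_dotted_quad 167772163   # 10.0.0.3 -> initiator1, and so on, two per pair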
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@138 -- # set_up target0_br 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 
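Stripped of the xtrace noise, the per-pair setup above does the following: create a veth pair per side, move the target end into the nvmf_ns_spdk namespace, assign the pool addresses and record each one in the interface's ifalias (read back later by get_ip_address during ping_ips), enslave the *_br peers to the nvmf_br bridge, and open TCP port 4420 with an iptables rule tagged SPDK_NVMF so the teardown seen earlier can strip it with iptables-save | grep -v SPDK_NVMF | iptables-restore. A condensed, illustrative replay of the first pair follows (run as root); it summarizes the commands already shown rather than reproducing nvmf/setup.sh.

#!/usr/bin/env bash
# Condensed replay of the first initiator/target pair from the trace above
# (same device, namespace and bridge names); illustrative only, needs root.
set -e

ip netns add nvmf_ns_spdk                     # target side lives in its own netns
ip link add nvmf_br type bridge && ip link set nvmf_br up

ip link add initiator0 type veth peer name initiator0_br
ip link add target0    type veth peer name target0_br
ip link set target0 netns nvmf_ns_spdk        # move the target end into the namespace

ip addr add 10.0.0.1/24 dev initiator0
echo 10.0.0.1 > /sys/class/net/initiator0/ifalias   # read back later by get_ip_address
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias

ip link set initiator0 up
ip netns exec nvmf_ns_spdk ip link set target0 up

ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
ip link set target0_br    master nvmf_br && ip link set target0_br    up

# Allow NVMe/TCP (port 4420) in, tagged so teardown can strip only SPDK rules.
iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'

The second pair (initiator1/target1, 10.0.0.3 and 10.0.0.4) that continues below repeats the same steps with the next two addresses from the pool.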
00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@160 -- # set_up initiator1 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@160 -- # set_up target1 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set target1 up 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@161 -- # set_up target1_br 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # add_to_ns target1 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator1 
ip=167772163 in_ns= 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772163 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:21:37.064 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:21:37.065 10.0.0.3 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772164 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:21:37.065 10.0.0.4 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@75 -- # set_up initiator1 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:21:37.065 13:26:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:21:37.065 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@138 -- # set_up target1_br 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 
4420 -j ACCEPT' 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@38 -- # ping_ips 2 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo initiator0 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=initiator0 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # ip netns exec 
nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:37.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:37.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:21:37.325 00:21:37.325 --- 10.0.0.1 ping statistics --- 00:21:37.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.325 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target0 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo target0 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=target0 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:21:37.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:37.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.025 ms 00:21:37.325 00:21:37.325 --- 10.0.0.2 ping statistics --- 00:21:37.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.325 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=initiator1 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo initiator1 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=initiator1 00:21:37.325 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:37.326 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:21:37.326 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:21:37.326 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:21:37.326 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:21:37.326 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:21:37.326 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:37.326 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:37.326 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:37.326 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:21:37.326 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:21:37.326 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:37.326 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:21:37.326 00:21:37.326 --- 10.0.0.3 ping statistics --- 00:21:37.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.326 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:21:37.326 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:37.326 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:37.326 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:37.326 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:37.326 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:37.326 13:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target1 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo target1 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=target1 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:21:37.326 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:21:37.326 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:21:37.326 00:21:37.326 --- 10.0.0.4 ping statistics --- 00:21:37.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.326 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # return 0 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo initiator0 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=initiator0 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator1 
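For readers following the trace, the second initiator/target pair that was just configured and ping-verified boils down to a handful of iproute2 commands. The block below is a hedged condensation of the setup.sh steps traced in this run, not the repo's helper functions; device, bridge and namespace names are taken from the log, everything else is standard ip/iptables usage.

    # Hedged sketch of the *1 pair: initiator1 stays on the host,
    # target1 is moved into the nvmf_ns_spdk network namespace.
    ip link add initiator1 type veth peer name initiator1_br
    ip link add target1    type veth peer name target1_br
    ip link set target1 netns nvmf_ns_spdk

    # 167772163 and 167772164 are simply 10.0.0.3 and 10.0.0.4 written as
    # 32-bit integers (10 * 2^24 + 3 and 10 * 2^24 + 4), which is what the
    # printf '%u.%u.%u.%u' expansion above prints.
    ip addr add 10.0.0.3/24 dev initiator1
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
    # The scripts also mirror each address into /sys/class/net/<dev>/ifalias
    # so get_ip_address can read it back later, as seen in the tee/cat lines.

    ip link set initiator1 up
    ip link set initiator1_br up
    ip netns exec nvmf_ns_spdk ip link set target1 up

    # Both bridge-side peers join the shared nvmf_br bridge, and TCP/4420
    # is opened on the initiator interface for NVMe/TCP.
    ip link set initiator1_br master nvmf_br
    ip link set target1_br master nvmf_br
    ip link set target1_br up
    iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT

    # Connectivity is then checked in both directions, as in the pings above:
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3   # namespace -> initiator1
    ping -c 1 10.0.0.4                              # host      -> target1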
00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=initiator1 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo initiator1 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=initiator1 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target0 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo target0 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=target0 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:37.326 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target1 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo target1 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=target1 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@35 -- # nvmfappstart -m 0x2 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:37.327 
13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@324 -- # nvmfpid=76629 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # waitforlisten 76629 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 76629 ']' 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:37.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:37.327 13:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:37.327 [2024-09-27 13:26:39.168696] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:21:37.327 [2024-09-27 13:26:39.168831] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.584 [2024-09-27 13:26:39.309110] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.584 [2024-09-27 13:26:39.380588] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.584 [2024-09-27 13:26:39.380655] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.585 [2024-09-27 13:26:39.380669] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.585 [2024-09-27 13:26:39.380693] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.585 [2024-09-27 13:26:39.380704] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:37.585 [2024-09-27 13:26:39.380744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.585 [2024-09-27 13:26:39.415442] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:38.519 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:38.519 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:21:38.519 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:21:38.519 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:38.519 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:38.519 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.519 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@38 -- # rpc_cmd 00:21:38.519 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.519 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:38.519 [2024-09-27 13:26:40.229643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.519 [2024-09-27 13:26:40.237837] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:38.519 null0 00:21:38.519 [2024-09-27 13:26:40.269820] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.519 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.519 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@54 -- # hostpid=76661 00:21:38.519 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:38.519 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@55 -- # waitforlisten 76661 /tmp/host.sock 00:21:38.519 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 76661 ']' 00:21:38.520 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:21:38.520 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:38.520 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:38.520 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:38.520 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:38.520 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:38.520 [2024-09-27 13:26:40.342529] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 
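Two separate SPDK processes are coming up at this point: the target (pid 76629) runs inside the nvmf_ns_spdk namespace and answers RPC on the default /var/tmp/spdk.sock, while the host-side bdev_nvme process (pid 76661, started with --wait-for-rpc -L bdev_nvme) answers on /tmp/host.sock. The sketch below shows how such a launch-and-wait sequence looks outside the harness; it is an assumption-laden stand-in for waitforlisten (the real helper does more, e.g. RPC probing), and the rpc/scripts layout is assumed to sit next to the build/bin path shown in the log.

    SPDK=/home/vagrant/spdk_repo/spdk

    # Target side: runs inside the namespace, default RPC socket.
    ip netns exec nvmf_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    target_pid=$!

    # Host side: its own core mask, its own RPC socket, bdev_nvme debug logs.
    "$SPDK/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    host_pid=$!

    # Minimal stand-in for waitforlisten: poll until the UNIX-domain RPC
    # socket appears, bailing out if the process dies first.
    wait_for_sock() {
        local pid=$1 sock=$2
        for _ in $(seq 1 100); do
            kill -0 "$pid" 2>/dev/null || return 1   # process exited early
            [[ -S "$sock" ]] && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_sock "$target_pid" /var/tmp/spdk.sock
    wait_for_sock "$host_pid"   /tmp/host.sock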
00:21:38.520 [2024-09-27 13:26:40.342613] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76661 ] 00:21:38.779 [2024-09-27 13:26:40.479626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.779 [2024-09-27 13:26:40.551628] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.779 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:38.779 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:21:38.779 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@57 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:38.779 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:38.779 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.779 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:38.779 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.779 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@61 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:38.779 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.779 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:39.038 [2024-09-27 13:26:40.656312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:39.038 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.038 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:39.038 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.038 13:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:39.976 [2024-09-27 13:26:41.693002] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:39.976 [2024-09-27 13:26:41.693062] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:39.976 [2024-09-27 13:26:41.693084] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:39.976 [2024-09-27 13:26:41.699075] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:39.976 [2024-09-27 13:26:41.756160] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:39.976 [2024-09-27 13:26:41.756261] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:39.976 [2024-09-27 13:26:41.756289] 
bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:39.976 [2024-09-27 13:26:41.756310] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:39.976 [2024-09-27 13:26:41.756338] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:39.976 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.976 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@67 -- # wait_for_bdev nvme0n1 00:21:39.976 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:21:39.976 [2024-09-27 13:26:41.761646] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1baf1e0 was disconnected and freed. delete nvme_qpair. 00:21:39.976 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:39.976 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.976 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:39.976 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:21:39.976 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:21:39.976 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:21:39.976 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.976 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:39.976 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@70 -- # ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev target0 00:21:39.976 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@71 -- # ip netns exec nvmf_ns_spdk ip link set target0 down 00:21:40.234 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@74 -- # wait_for_bdev '' 00:21:40.234 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:21:40.234 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:21:40.234 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:40.234 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.234 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:40.234 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:21:40.234 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:21:40.234 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.234 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:21:40.234 13:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sleep 1 00:21:41.171 13:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:21:41.171 13:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:21:41.171 13:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:41.171 13:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:21:41.171 13:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.171 13:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:41.171 13:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:21:41.171 13:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.171 13:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:21:41.171 13:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:21:42.549 13:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:21:42.549 13:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.549 13:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.549 13:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:42.549 13:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:21:42.549 13:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:21:42.549 13:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:21:42.549 13:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.549 13:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:21:42.549 13:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:21:43.485 13:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:21:43.485 13:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.485 13:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.485 13:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:21:43.485 13:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:43.485 13:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:21:43.485 13:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:21:43.485 13:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.485 13:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' 
]] 00:21:43.485 13:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:21:44.423 13:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:21:44.423 13:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:44.423 13:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:21:44.423 13:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:21:44.423 13:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.423 13:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:44.423 13:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:21:44.423 13:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.423 13:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:21:44.423 13:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:21:45.358 13:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:21:45.358 13:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.358 13:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:21:45.358 13:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.358 13:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:45.358 13:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:21:45.358 13:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:21:45.358 13:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.358 [2024-09-27 13:26:47.184207] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:21:45.358 [2024-09-27 13:26:47.184294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.358 [2024-09-27 13:26:47.184311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.358 [2024-09-27 13:26:47.184324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.358 [2024-09-27 13:26:47.184348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.358 [2024-09-27 13:26:47.184357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.358 [2024-09-27 13:26:47.184366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.358 [2024-09-27 
13:26:47.184376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.358 [2024-09-27 13:26:47.184385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.358 [2024-09-27 13:26:47.184395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.358 [2024-09-27 13:26:47.184405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.358 [2024-09-27 13:26:47.184431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8b480 is same with the state(6) to be set 00:21:45.358 [2024-09-27 13:26:47.194203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8b480 (9): Bad file descriptor 00:21:45.358 13:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:21:45.358 13:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:21:45.358 [2024-09-27 13:26:47.204244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.741 13:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:21:46.741 13:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:21:46.741 13:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.741 13:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.741 13:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:21:46.741 13:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:46.741 13:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:21:46.741 [2024-09-27 13:26:48.250775] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:21:46.741 [2024-09-27 13:26:48.250902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8b480 with addr=10.0.0.2, port=4420 00:21:46.741 [2024-09-27 13:26:48.250937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8b480 is same with the state(6) to be set 00:21:46.741 [2024-09-27 13:26:48.251011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8b480 (9): Bad file descriptor 00:21:46.741 [2024-09-27 13:26:48.251972] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.741 [2024-09-27 13:26:48.252055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.741 [2024-09-27 13:26:48.252082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.741 [2024-09-27 13:26:48.252104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
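The errors above are the point of the test: with target0's address removed and the link down, the discovery-attached controller hits its reconnect policy (--reconnect-delay-sec 1, --fast-io-fail-timeout-sec 1, --ctrlr-loss-timeout-sec 2) and the namespace bdev is expected to disappear. The repeated "!= ''" checks in the trace come from a one-second polling loop over the host app's bdev list; a minimal reconstruction of the two helpers, built only from the commands visible in this log (the repo versions carry extra bookkeeping such as bounded retries), looks like this:

    SPDK=/home/vagrant/spdk_repo/spdk    # checkout root, as in the paths above

    # List bdev names known to the host-side app (what get_bdev_list traces).
    get_bdev_list() {
        "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

    # Poll until the list matches the expectation ('' means "gone"),
    # sleeping 1s between attempts as in the sleep 1 lines above.
    # A real harness would also bound the number of retries.
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev ''    # nvme0n1 must vanish once the controller is declared lost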
00:21:46.741 [2024-09-27 13:26:48.252170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.741 [2024-09-27 13:26:48.252197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.741 13:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.741 13:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:21:46.741 13:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:21:47.683 [2024-09-27 13:26:49.252256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:47.683 [2024-09-27 13:26:49.252329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:47.683 [2024-09-27 13:26:49.252342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:47.683 [2024-09-27 13:26:49.252368] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:21:47.683 [2024-09-27 13:26:49.252392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.683 [2024-09-27 13:26:49.252436] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:21:47.683 [2024-09-27 13:26:49.252495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.683 [2024-09-27 13:26:49.252512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.683 [2024-09-27 13:26:49.252526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.683 [2024-09-27 13:26:49.252537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.683 [2024-09-27 13:26:49.252547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.683 [2024-09-27 13:26:49.252557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.683 [2024-09-27 13:26:49.252567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.683 [2024-09-27 13:26:49.252576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.683 [2024-09-27 13:26:49.252587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.683 [2024-09-27 13:26:49.252596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.683 [2024-09-27 13:26:49.252605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
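At this point the controller has been declared lost and the discovery entry for nqn.2016-06.io.spdk:cnode0 is being removed. The fault itself was injected, and is about to be cleared, with nothing more than address/link toggles on target0 inside the namespace; condensed from the discovery_remove_ifc.sh steps in this trace:

    in_ns() { ip netns exec nvmf_ns_spdk "$@"; }

    # Inject the failure: drop the listener address and take the link down.
    in_ns ip addr del 10.0.0.2/24 dev target0
    in_ns ip link set target0 down
    # ... wait for the bdev list to become empty (wait_for_bdev '' above) ...

    # Clear the failure: restore the address and bring the link back up.
    in_ns ip addr add 10.0.0.2/24 dev target0
    in_ns ip link set target0 up
    # The discovery service then re-attaches the same subsystem as a new
    # controller (nvme1), so the test switches to waiting for nvme1n1.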
00:21:47.683 [2024-09-27 13:26:49.253013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b17c90 (9): Bad file descriptor 00:21:47.683 [2024-09-27 13:26:49.254023] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:47.683 [2024-09-27 13:26:49.254039] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != '' ]] 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@77 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@78 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@81 -- # wait_for_bdev nvme1n1 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:47.683 13:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:21:48.617 13:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:21:48.617 13:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.617 13:26:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.617 13:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:48.617 13:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:21:48.617 13:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:21:48.617 13:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:21:48.617 13:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.875 13:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:48.875 13:26:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:21:49.441 [2024-09-27 13:26:51.257925] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:49.441 [2024-09-27 13:26:51.257971] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:49.441 [2024-09-27 13:26:51.257997] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:49.441 [2024-09-27 13:26:51.263982] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:21:49.699 [2024-09-27 13:26:51.320378] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:49.699 [2024-09-27 13:26:51.320432] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:49.699 [2024-09-27 13:26:51.320457] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:49.699 [2024-09-27 13:26:51.320474] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:21:49.699 [2024-09-27 13:26:51.320483] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:49.699 [2024-09-27 13:26:51.326411] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1b95840 was disconnected and freed. delete nvme_qpair. 
00:21:49.699 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:21:49.699 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.699 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:21:49.699 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.699 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:49.699 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:21:49.699 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:21:49.699 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.699 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:21:49.699 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:49.699 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@85 -- # killprocess 76661 00:21:49.699 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 76661 ']' 00:21:49.699 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 76661 00:21:49.699 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:21:49.957 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:49.957 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76661 00:21:49.957 killing process with pid 76661 00:21:49.957 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:49.957 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:49.957 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76661' 00:21:49.957 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 76661 00:21:49.957 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 76661 00:21:49.957 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # nvmftestfini 00:21:49.957 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # nvmfcleanup 00:21:49.957 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@99 -- # sync 00:21:49.957 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:49.957 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@102 -- # set +e 00:21:49.957 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:49.957 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:49.957 rmmod nvme_tcp 00:21:50.214 rmmod nvme_fabrics 00:21:50.214 rmmod nvme_keyring 00:21:50.214 13:26:51 
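killprocess 76661 above is the standard autotest_common.sh shutdown helper: confirm the pid is alive with kill -0, check the process name so a sudo wrapper is not killed by mistake, then kill and wait. Roughly, as a simplified sketch (not the verbatim helper; the sudo special case is omitted):

  killprocess() {
      local pid=$1
      [[ -n "$pid" ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 0        # nothing to do if it already exited
      if [[ "$(uname)" == Linux ]]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in the trace
          [[ "$name" != sudo ]] || return 1         # the real helper treats sudo-wrapped pids differently
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }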
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:50.214 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@106 -- # set -e 00:21:50.214 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@107 -- # return 0 00:21:50.214 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # '[' -n 76629 ']' 00:21:50.214 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@333 -- # killprocess 76629 00:21:50.214 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 76629 ']' 00:21:50.214 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 76629 00:21:50.214 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:21:50.214 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:50.214 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76629 00:21:50.214 killing process with pid 76629 00:21:50.214 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:50.214 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:50.214 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76629' 00:21:50.214 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 76629 00:21:50.214 13:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 76629 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # nvmf_fini 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@264 -- # local dev 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@267 -- # remove_target_ns 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@268 -- # delete_main_bridge 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@270 
-- # for dev in "${dev_map[@]}" 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@271 -- # continue 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@271 -- # continue 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # _dev=0 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # dev_map=() 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@284 -- # iptr 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@538 -- # iptables-save 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@538 -- # iptables-restore 00:21:50.473 00:21:50.473 real 0m13.827s 00:21:50.473 user 0m23.278s 00:21:50.473 sys 0m2.481s 00:21:50.473 ************************************ 00:21:50.473 END TEST nvmf_discovery_remove_ifc 00:21:50.473 ************************************ 00:21:50.473 13:26:52 
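The teardown traced in this block amounts to: drop the target namespace (remove_target_ns runs with its xtrace silenced), delete the bridge and the host-side veth devices (target0/target1 hit the continue branch because they vanished along with the namespace), then restore iptables minus the harness rules, all of which carry an SPDK_NVMF comment tag. A condensed equivalent, under those assumptions:

  ip netns delete nvmf_ns_spdk 2>/dev/null    # takes target0/target1 with it
  ip link delete nvmf_br
  ip link delete initiator0
  ip link delete initiator1
  # Keep every rule except the SPDK-tagged ones the harness inserted earlier.
  iptables-save | grep -v SPDK_NVMF | iptables-restore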
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:50.473 13:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:50.474 13:26:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:50.474 13:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:50.474 13:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:50.474 13:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.474 ************************************ 00:21:50.474 START TEST nvmf_identify_kernel_target 00:21:50.474 ************************************ 00:21:50.474 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:50.733 * Looking for test storage... 00:21:50.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:50.733 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:50.733 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:50.733 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:21:50.733 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:50.733 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:50.733 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:50.733 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:50.733 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:50.733 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:50.733 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:50.733 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:50.733 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:50.733 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:50.733 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:50.733 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:50.733 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:21:50.733 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:21:50.733 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:50.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.734 --rc genhtml_branch_coverage=1 00:21:50.734 --rc genhtml_function_coverage=1 00:21:50.734 --rc genhtml_legend=1 00:21:50.734 --rc geninfo_all_blocks=1 00:21:50.734 --rc geninfo_unexecuted_blocks=1 00:21:50.734 00:21:50.734 ' 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:50.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.734 --rc genhtml_branch_coverage=1 00:21:50.734 --rc genhtml_function_coverage=1 00:21:50.734 --rc genhtml_legend=1 00:21:50.734 --rc geninfo_all_blocks=1 00:21:50.734 --rc geninfo_unexecuted_blocks=1 00:21:50.734 00:21:50.734 ' 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:50.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.734 --rc genhtml_branch_coverage=1 00:21:50.734 --rc genhtml_function_coverage=1 00:21:50.734 --rc genhtml_legend=1 00:21:50.734 --rc geninfo_all_blocks=1 00:21:50.734 --rc geninfo_unexecuted_blocks=1 00:21:50.734 00:21:50.734 ' 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:50.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.734 --rc genhtml_branch_coverage=1 00:21:50.734 --rc genhtml_function_coverage=1 00:21:50.734 --rc genhtml_legend=1 00:21:50.734 --rc geninfo_all_blocks=1 00:21:50.734 --rc geninfo_unexecuted_blocks=1 00:21:50.734 00:21:50.734 ' 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
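The scripts/common.sh trace above ("lt 1.15 2" via cmp_versions) is only deciding whether the installed lcov is older than 2.x, which determines the coverage flags exported next. A minimal stand-alone version of that dotted-version comparison (a reconstruction for illustration; the real cmp_versions also splits on '-' and ':'):

  version_lt() {                     # true when $1 is an older version than $2
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                       # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo 'lcov older than 2.x: use the 1.x flag set'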
00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@50 -- # : 0 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:50.734 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:50.734 13:26:52 
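The "[: : integer expression expected" complaint above is benign: line 31 of nvmf/common.sh runs a "[ ... -eq 1 ]" test whose variable expands empty in this configuration, and -eq refuses non-integer operands, so the test prints a warning instead of simply failing. A generic illustration of the pitfall (not a patch to common.sh):

  flag=""                                   # empty, as in this run
  # [ "$flag" -eq 1 ]                       # -> [: : integer expression expected
  [ "${flag:-0}" -eq 1 ] || echo 'flag unset, treating as 0'    # default guard keeps the test quiet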
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # prepare_net_devs 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # local -g is_hw=no 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # remove_target_ns 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:21:50.734 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@276 -- # nvmf_veth_init 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@233 -- # create_target_ns 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:50.735 13:26:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@234 -- # create_main_bridge 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@114 -- # delete_main_bridge 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@130 -- # return 0 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@28 -- # local -g _dev 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # ips=() 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 
00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@160 -- # set_up initiator0 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@160 -- # set_up target0 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set target0 up 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@161 -- # set_up target0_br 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:50.735 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:21:50.735 13:26:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:21:50.994 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:21:50.994 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # add_to_ns target0 00:21:50.994 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:21:50.994 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:21:50.994 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:21:50.994 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:21:50.994 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:50.994 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:21:50.994 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772161 00:21:50.994 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:50.994 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:21:50.994 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:21:50.995 10.0.0.1 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772162 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip 
netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:21:50.995 10.0.0.2 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@75 -- # set_up initiator0 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@138 -- # set_up target0_br 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # ips=() 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@160 -- # set_up initiator1 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:50.995 13:26:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@160 -- # set_up target1 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set target1 up 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@161 -- # set_up target1_br 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # add_to_ns target1 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/setup.sh@11 -- # local val=167772163 00:21:50.995 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:21:50.996 10.0.0.3 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772164 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:21:50.996 10.0.0.4 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@75 -- # set_up initiator1 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:21:50.996 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@138 -- # set_up target1_br 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@85 -- # 
dev_map["$key_target"]=target1 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@38 -- # ping_ips 2 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo initiator0 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=initiator0 00:21:51.256 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:51.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:51.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:21:51.257 00:21:51.257 --- 10.0.0.1 ping statistics --- 00:21:51.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.257 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target0 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo target0 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=target0 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:21:51.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:51.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:21:51.257 00:21:51.257 --- 10.0.0.2 ping statistics --- 00:21:51.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.257 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=initiator1 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo initiator1 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=initiator1 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:21:51.257 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:51.257 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:21:51.257 00:21:51.257 --- 10.0.0.3 ping statistics --- 00:21:51.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.257 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target1 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo target1 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=target1 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:21:51.257 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
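The xtrace above is setup.sh bringing up the second initiator/target veth pair and then verifying connectivity for both pairs (the reply for the 10.0.0.4 ping completes just below). The 32-bit values handed to val_to_ip come from the shared address pool and are printed as dotted quads, e.g. 167772163 = 10*2^24 + 3 = 10.0.0.3 and 167772164 = 10.0.0.4. A condensed sketch of the roughly equivalent manual sequence, assuming the veth pairs, the nvmf_br bridge and the nvmf_ns_spdk namespace were already created earlier by setup.sh:

  ip addr add 10.0.0.3/24 dev initiator1                       # initiator side stays in the host netns
  echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias        # the alias is what get_ip_address() reads back
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
  echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias
  ip link set initiator1 up
  ip netns exec nvmf_ns_spdk ip link set target1 up
  ip link set initiator1_br master nvmf_br && ip link set initiator1_br up
  ip link set target1_br master nvmf_br && ip link set target1_br up
  iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'
  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1                # initiator IPs are pinged from inside the target netns
  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3
  ping -c 1 10.0.0.2                                           # target IPs are pinged from the host
  ping -c 1 10.0.0.4

Tagging the iptables rule with an SPDK_NVMF comment is what later lets the cleanup path remove only the rules it added.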
00:21:51.257 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.107 ms 00:21:51.257 00:21:51.257 --- 10.0.0.4 ping statistics --- 00:21:51.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.257 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # return 0 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:51.257 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo initiator0 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=initiator0 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@183 -- # get_ip_address initiator1 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=initiator1 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo initiator1 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=initiator1 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target0 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo target0 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=target0 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:51.258 13:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target1 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo target1 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=target1 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:21:51.258 13:26:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmet=/sys/kernel/config/nvmet 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@435 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@437 -- # local block nvme 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@439 -- # [[ ! -e /sys/module/nvmet ]] 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # modprobe nvmet 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:51.258 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:51.826 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:51.826 Waiting for block devices as requested 00:21:51.826 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:51.826 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # for block in /sys/block/nvme* 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # is_block_zoned nvme0n1 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # block_in_use nvme0n1 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:52.085 No valid GPT data, bailing 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # 
return 1 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # nvme=/dev/nvme0n1 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # for block in /sys/block/nvme* 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # is_block_zoned nvme0n2 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # block_in_use nvme0n2 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:52.085 No valid GPT data, bailing 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # nvme=/dev/nvme0n2 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # for block in /sys/block/nvme* 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # is_block_zoned nvme0n3 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # block_in_use nvme0n3 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:52.085 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:52.085 No valid GPT data, bailing 00:21:52.086 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:52.086 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:52.086 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:52.086 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # nvme=/dev/nvme0n3 00:21:52.086 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 
-- # for block in /sys/block/nvme* 00:21:52.086 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:52.086 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # is_block_zoned nvme1n1 00:21:52.086 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:21:52.086 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:52.086 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:52.086 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # block_in_use nvme1n1 00:21:52.086 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:52.086 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:52.345 No valid GPT data, bailing 00:21:52.345 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:52.345 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:52.345 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:52.345 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # nvme=/dev/nvme1n1 00:21:52.345 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ -b /dev/nvme1n1 ]] 00:21:52.345 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:52.345 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:52.345 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:52.345 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:52.345 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # echo 1 00:21:52.345 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@466 -- # echo /dev/nvme1n1 00:21:52.345 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # echo 1 00:21:52.345 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # echo 10.0.0.1 00:21:52.345 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # echo tcp 00:21:52.345 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@471 -- # echo 4420 00:21:52.345 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # echo ipv4 00:21:52.345 13:26:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@475 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:52.345 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid=1dd592da-03b1-46ba-b90a-3aebb25e3723 -a 10.0.0.1 -t tcp -s 4420 00:21:52.345 
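The block above builds an in-kernel NVMe-oF/TCP target through configfs: nvmet is loaded, setup.sh reset hands the data disks back to the kernel nvme driver, the first disk without a partition table (/dev/nvme1n1 here) is picked as backing storage, and a subsystem, a namespace and a TCP port are created and linked before nvme discover is pointed at 10.0.0.1:4420 (its two-record discovery log follows below). A standalone sketch of the same steps; xtrace hides the redirection targets, so the configfs attribute names below are the standard kernel nvmet ones, inferred rather than copied from the log:

  modprobe nvmet
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed target; it reappears as the Model Number below
  echo 1 > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"         # backing block device selected above
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp > "$nvmet/ports/1/addr_trtype"
  echo 4420 > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4 > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"                   # linking the subsystem makes the port reachable

Once the symlink exists, nvme discover -t tcp -a 10.0.0.1 -s 4420 should return exactly the two records printed next: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.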
00:21:52.345 Discovery Log Number of Records 2, Generation counter 2 00:21:52.345 =====Discovery Log Entry 0====== 00:21:52.345 trtype: tcp 00:21:52.345 adrfam: ipv4 00:21:52.345 subtype: current discovery subsystem 00:21:52.345 treq: not specified, sq flow control disable supported 00:21:52.345 portid: 1 00:21:52.345 trsvcid: 4420 00:21:52.345 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:52.345 traddr: 10.0.0.1 00:21:52.345 eflags: none 00:21:52.345 sectype: none 00:21:52.345 =====Discovery Log Entry 1====== 00:21:52.345 trtype: tcp 00:21:52.345 adrfam: ipv4 00:21:52.345 subtype: nvme subsystem 00:21:52.345 treq: not specified, sq flow control disable supported 00:21:52.345 portid: 1 00:21:52.345 trsvcid: 4420 00:21:52.345 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:52.345 traddr: 10.0.0.1 00:21:52.345 eflags: none 00:21:52.345 sectype: none 00:21:52.345 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:21:52.345 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:21:52.606 ===================================================== 00:21:52.606 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:52.606 ===================================================== 00:21:52.606 Controller Capabilities/Features 00:21:52.606 ================================ 00:21:52.606 Vendor ID: 0000 00:21:52.606 Subsystem Vendor ID: 0000 00:21:52.606 Serial Number: 092b20d000a413c92ed7 00:21:52.606 Model Number: Linux 00:21:52.606 Firmware Version: 6.8.9-20 00:21:52.606 Recommended Arb Burst: 0 00:21:52.606 IEEE OUI Identifier: 00 00 00 00:21:52.606 Multi-path I/O 00:21:52.606 May have multiple subsystem ports: No 00:21:52.606 May have multiple controllers: No 00:21:52.606 Associated with SR-IOV VF: No 00:21:52.606 Max Data Transfer Size: Unlimited 00:21:52.606 Max Number of Namespaces: 0 00:21:52.606 Max Number of I/O Queues: 1024 00:21:52.606 NVMe Specification Version (VS): 1.3 00:21:52.606 NVMe Specification Version (Identify): 1.3 00:21:52.606 Maximum Queue Entries: 1024 00:21:52.606 Contiguous Queues Required: No 00:21:52.606 Arbitration Mechanisms Supported 00:21:52.606 Weighted Round Robin: Not Supported 00:21:52.606 Vendor Specific: Not Supported 00:21:52.606 Reset Timeout: 7500 ms 00:21:52.606 Doorbell Stride: 4 bytes 00:21:52.606 NVM Subsystem Reset: Not Supported 00:21:52.606 Command Sets Supported 00:21:52.606 NVM Command Set: Supported 00:21:52.606 Boot Partition: Not Supported 00:21:52.606 Memory Page Size Minimum: 4096 bytes 00:21:52.606 Memory Page Size Maximum: 4096 bytes 00:21:52.606 Persistent Memory Region: Not Supported 00:21:52.606 Optional Asynchronous Events Supported 00:21:52.606 Namespace Attribute Notices: Not Supported 00:21:52.606 Firmware Activation Notices: Not Supported 00:21:52.606 ANA Change Notices: Not Supported 00:21:52.606 PLE Aggregate Log Change Notices: Not Supported 00:21:52.606 LBA Status Info Alert Notices: Not Supported 00:21:52.606 EGE Aggregate Log Change Notices: Not Supported 00:21:52.606 Normal NVM Subsystem Shutdown event: Not Supported 00:21:52.606 Zone Descriptor Change Notices: Not Supported 00:21:52.606 Discovery Log Change Notices: Supported 00:21:52.606 Controller Attributes 00:21:52.606 128-bit Host Identifier: Not Supported 00:21:52.606 Non-Operational Permissive Mode: Not Supported 00:21:52.606 NVM Sets: Not Supported 00:21:52.606 Read Recovery Levels: Not 
Supported 00:21:52.606 Endurance Groups: Not Supported 00:21:52.606 Predictable Latency Mode: Not Supported 00:21:52.606 Traffic Based Keep ALive: Not Supported 00:21:52.606 Namespace Granularity: Not Supported 00:21:52.606 SQ Associations: Not Supported 00:21:52.606 UUID List: Not Supported 00:21:52.606 Multi-Domain Subsystem: Not Supported 00:21:52.606 Fixed Capacity Management: Not Supported 00:21:52.606 Variable Capacity Management: Not Supported 00:21:52.606 Delete Endurance Group: Not Supported 00:21:52.606 Delete NVM Set: Not Supported 00:21:52.606 Extended LBA Formats Supported: Not Supported 00:21:52.606 Flexible Data Placement Supported: Not Supported 00:21:52.606 00:21:52.606 Controller Memory Buffer Support 00:21:52.606 ================================ 00:21:52.606 Supported: No 00:21:52.606 00:21:52.606 Persistent Memory Region Support 00:21:52.606 ================================ 00:21:52.606 Supported: No 00:21:52.606 00:21:52.606 Admin Command Set Attributes 00:21:52.606 ============================ 00:21:52.606 Security Send/Receive: Not Supported 00:21:52.606 Format NVM: Not Supported 00:21:52.606 Firmware Activate/Download: Not Supported 00:21:52.606 Namespace Management: Not Supported 00:21:52.606 Device Self-Test: Not Supported 00:21:52.606 Directives: Not Supported 00:21:52.606 NVMe-MI: Not Supported 00:21:52.606 Virtualization Management: Not Supported 00:21:52.606 Doorbell Buffer Config: Not Supported 00:21:52.606 Get LBA Status Capability: Not Supported 00:21:52.606 Command & Feature Lockdown Capability: Not Supported 00:21:52.606 Abort Command Limit: 1 00:21:52.606 Async Event Request Limit: 1 00:21:52.606 Number of Firmware Slots: N/A 00:21:52.606 Firmware Slot 1 Read-Only: N/A 00:21:52.606 Firmware Activation Without Reset: N/A 00:21:52.606 Multiple Update Detection Support: N/A 00:21:52.606 Firmware Update Granularity: No Information Provided 00:21:52.606 Per-Namespace SMART Log: No 00:21:52.606 Asymmetric Namespace Access Log Page: Not Supported 00:21:52.606 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:52.606 Command Effects Log Page: Not Supported 00:21:52.606 Get Log Page Extended Data: Supported 00:21:52.606 Telemetry Log Pages: Not Supported 00:21:52.606 Persistent Event Log Pages: Not Supported 00:21:52.606 Supported Log Pages Log Page: May Support 00:21:52.606 Commands Supported & Effects Log Page: Not Supported 00:21:52.606 Feature Identifiers & Effects Log Page:May Support 00:21:52.606 NVMe-MI Commands & Effects Log Page: May Support 00:21:52.606 Data Area 4 for Telemetry Log: Not Supported 00:21:52.606 Error Log Page Entries Supported: 1 00:21:52.606 Keep Alive: Not Supported 00:21:52.606 00:21:52.606 NVM Command Set Attributes 00:21:52.606 ========================== 00:21:52.606 Submission Queue Entry Size 00:21:52.606 Max: 1 00:21:52.606 Min: 1 00:21:52.606 Completion Queue Entry Size 00:21:52.606 Max: 1 00:21:52.606 Min: 1 00:21:52.606 Number of Namespaces: 0 00:21:52.606 Compare Command: Not Supported 00:21:52.606 Write Uncorrectable Command: Not Supported 00:21:52.606 Dataset Management Command: Not Supported 00:21:52.606 Write Zeroes Command: Not Supported 00:21:52.606 Set Features Save Field: Not Supported 00:21:52.606 Reservations: Not Supported 00:21:52.606 Timestamp: Not Supported 00:21:52.606 Copy: Not Supported 00:21:52.606 Volatile Write Cache: Not Present 00:21:52.606 Atomic Write Unit (Normal): 1 00:21:52.606 Atomic Write Unit (PFail): 1 00:21:52.606 Atomic Compare & Write Unit: 1 00:21:52.606 Fused Compare & Write: Not 
Supported 00:21:52.606 Scatter-Gather List 00:21:52.606 SGL Command Set: Supported 00:21:52.606 SGL Keyed: Not Supported 00:21:52.606 SGL Bit Bucket Descriptor: Not Supported 00:21:52.606 SGL Metadata Pointer: Not Supported 00:21:52.606 Oversized SGL: Not Supported 00:21:52.606 SGL Metadata Address: Not Supported 00:21:52.606 SGL Offset: Supported 00:21:52.606 Transport SGL Data Block: Not Supported 00:21:52.606 Replay Protected Memory Block: Not Supported 00:21:52.606 00:21:52.606 Firmware Slot Information 00:21:52.606 ========================= 00:21:52.606 Active slot: 0 00:21:52.606 00:21:52.606 00:21:52.606 Error Log 00:21:52.606 ========= 00:21:52.606 00:21:52.606 Active Namespaces 00:21:52.606 ================= 00:21:52.606 Discovery Log Page 00:21:52.606 ================== 00:21:52.606 Generation Counter: 2 00:21:52.606 Number of Records: 2 00:21:52.606 Record Format: 0 00:21:52.606 00:21:52.606 Discovery Log Entry 0 00:21:52.606 ---------------------- 00:21:52.606 Transport Type: 3 (TCP) 00:21:52.606 Address Family: 1 (IPv4) 00:21:52.606 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:52.606 Entry Flags: 00:21:52.606 Duplicate Returned Information: 0 00:21:52.606 Explicit Persistent Connection Support for Discovery: 0 00:21:52.606 Transport Requirements: 00:21:52.606 Secure Channel: Not Specified 00:21:52.606 Port ID: 1 (0x0001) 00:21:52.606 Controller ID: 65535 (0xffff) 00:21:52.606 Admin Max SQ Size: 32 00:21:52.606 Transport Service Identifier: 4420 00:21:52.606 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:52.606 Transport Address: 10.0.0.1 00:21:52.606 Discovery Log Entry 1 00:21:52.606 ---------------------- 00:21:52.606 Transport Type: 3 (TCP) 00:21:52.606 Address Family: 1 (IPv4) 00:21:52.606 Subsystem Type: 2 (NVM Subsystem) 00:21:52.606 Entry Flags: 00:21:52.606 Duplicate Returned Information: 0 00:21:52.607 Explicit Persistent Connection Support for Discovery: 0 00:21:52.607 Transport Requirements: 00:21:52.607 Secure Channel: Not Specified 00:21:52.607 Port ID: 1 (0x0001) 00:21:52.607 Controller ID: 65535 (0xffff) 00:21:52.607 Admin Max SQ Size: 32 00:21:52.607 Transport Service Identifier: 4420 00:21:52.607 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:21:52.607 Transport Address: 10.0.0.1 00:21:52.607 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:52.607 get_feature(0x01) failed 00:21:52.607 get_feature(0x02) failed 00:21:52.607 get_feature(0x04) failed 00:21:52.607 ===================================================== 00:21:52.607 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:52.607 ===================================================== 00:21:52.607 Controller Capabilities/Features 00:21:52.607 ================================ 00:21:52.607 Vendor ID: 0000 00:21:52.607 Subsystem Vendor ID: 0000 00:21:52.607 Serial Number: 1a6cb0802d5df695fd84 00:21:52.607 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:21:52.607 Firmware Version: 6.8.9-20 00:21:52.607 Recommended Arb Burst: 6 00:21:52.607 IEEE OUI Identifier: 00 00 00 00:21:52.607 Multi-path I/O 00:21:52.607 May have multiple subsystem ports: Yes 00:21:52.607 May have multiple controllers: Yes 00:21:52.607 Associated with SR-IOV VF: No 00:21:52.607 Max Data Transfer Size: Unlimited 00:21:52.607 Max Number of Namespaces: 1024 
00:21:52.607 Max Number of I/O Queues: 128 00:21:52.607 NVMe Specification Version (VS): 1.3 00:21:52.607 NVMe Specification Version (Identify): 1.3 00:21:52.607 Maximum Queue Entries: 1024 00:21:52.607 Contiguous Queues Required: No 00:21:52.607 Arbitration Mechanisms Supported 00:21:52.607 Weighted Round Robin: Not Supported 00:21:52.607 Vendor Specific: Not Supported 00:21:52.607 Reset Timeout: 7500 ms 00:21:52.607 Doorbell Stride: 4 bytes 00:21:52.607 NVM Subsystem Reset: Not Supported 00:21:52.607 Command Sets Supported 00:21:52.607 NVM Command Set: Supported 00:21:52.607 Boot Partition: Not Supported 00:21:52.607 Memory Page Size Minimum: 4096 bytes 00:21:52.607 Memory Page Size Maximum: 4096 bytes 00:21:52.607 Persistent Memory Region: Not Supported 00:21:52.607 Optional Asynchronous Events Supported 00:21:52.607 Namespace Attribute Notices: Supported 00:21:52.607 Firmware Activation Notices: Not Supported 00:21:52.607 ANA Change Notices: Supported 00:21:52.607 PLE Aggregate Log Change Notices: Not Supported 00:21:52.607 LBA Status Info Alert Notices: Not Supported 00:21:52.607 EGE Aggregate Log Change Notices: Not Supported 00:21:52.607 Normal NVM Subsystem Shutdown event: Not Supported 00:21:52.607 Zone Descriptor Change Notices: Not Supported 00:21:52.607 Discovery Log Change Notices: Not Supported 00:21:52.607 Controller Attributes 00:21:52.607 128-bit Host Identifier: Supported 00:21:52.607 Non-Operational Permissive Mode: Not Supported 00:21:52.607 NVM Sets: Not Supported 00:21:52.607 Read Recovery Levels: Not Supported 00:21:52.607 Endurance Groups: Not Supported 00:21:52.607 Predictable Latency Mode: Not Supported 00:21:52.607 Traffic Based Keep ALive: Supported 00:21:52.607 Namespace Granularity: Not Supported 00:21:52.607 SQ Associations: Not Supported 00:21:52.607 UUID List: Not Supported 00:21:52.607 Multi-Domain Subsystem: Not Supported 00:21:52.607 Fixed Capacity Management: Not Supported 00:21:52.607 Variable Capacity Management: Not Supported 00:21:52.607 Delete Endurance Group: Not Supported 00:21:52.607 Delete NVM Set: Not Supported 00:21:52.607 Extended LBA Formats Supported: Not Supported 00:21:52.607 Flexible Data Placement Supported: Not Supported 00:21:52.607 00:21:52.607 Controller Memory Buffer Support 00:21:52.607 ================================ 00:21:52.607 Supported: No 00:21:52.607 00:21:52.607 Persistent Memory Region Support 00:21:52.607 ================================ 00:21:52.607 Supported: No 00:21:52.607 00:21:52.607 Admin Command Set Attributes 00:21:52.607 ============================ 00:21:52.607 Security Send/Receive: Not Supported 00:21:52.607 Format NVM: Not Supported 00:21:52.607 Firmware Activate/Download: Not Supported 00:21:52.607 Namespace Management: Not Supported 00:21:52.607 Device Self-Test: Not Supported 00:21:52.607 Directives: Not Supported 00:21:52.607 NVMe-MI: Not Supported 00:21:52.607 Virtualization Management: Not Supported 00:21:52.607 Doorbell Buffer Config: Not Supported 00:21:52.607 Get LBA Status Capability: Not Supported 00:21:52.607 Command & Feature Lockdown Capability: Not Supported 00:21:52.607 Abort Command Limit: 4 00:21:52.607 Async Event Request Limit: 4 00:21:52.607 Number of Firmware Slots: N/A 00:21:52.607 Firmware Slot 1 Read-Only: N/A 00:21:52.607 Firmware Activation Without Reset: N/A 00:21:52.607 Multiple Update Detection Support: N/A 00:21:52.607 Firmware Update Granularity: No Information Provided 00:21:52.607 Per-Namespace SMART Log: Yes 00:21:52.607 Asymmetric Namespace Access Log Page: Supported 
00:21:52.607 ANA Transition Time : 10 sec 00:21:52.607 00:21:52.607 Asymmetric Namespace Access Capabilities 00:21:52.607 ANA Optimized State : Supported 00:21:52.607 ANA Non-Optimized State : Supported 00:21:52.607 ANA Inaccessible State : Supported 00:21:52.607 ANA Persistent Loss State : Supported 00:21:52.607 ANA Change State : Supported 00:21:52.607 ANAGRPID is not changed : No 00:21:52.607 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:21:52.607 00:21:52.607 ANA Group Identifier Maximum : 128 00:21:52.607 Number of ANA Group Identifiers : 128 00:21:52.607 Max Number of Allowed Namespaces : 1024 00:21:52.607 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:21:52.607 Command Effects Log Page: Supported 00:21:52.607 Get Log Page Extended Data: Supported 00:21:52.607 Telemetry Log Pages: Not Supported 00:21:52.607 Persistent Event Log Pages: Not Supported 00:21:52.607 Supported Log Pages Log Page: May Support 00:21:52.607 Commands Supported & Effects Log Page: Not Supported 00:21:52.607 Feature Identifiers & Effects Log Page:May Support 00:21:52.607 NVMe-MI Commands & Effects Log Page: May Support 00:21:52.607 Data Area 4 for Telemetry Log: Not Supported 00:21:52.607 Error Log Page Entries Supported: 128 00:21:52.607 Keep Alive: Supported 00:21:52.607 Keep Alive Granularity: 1000 ms 00:21:52.607 00:21:52.607 NVM Command Set Attributes 00:21:52.607 ========================== 00:21:52.607 Submission Queue Entry Size 00:21:52.607 Max: 64 00:21:52.607 Min: 64 00:21:52.607 Completion Queue Entry Size 00:21:52.607 Max: 16 00:21:52.607 Min: 16 00:21:52.607 Number of Namespaces: 1024 00:21:52.607 Compare Command: Not Supported 00:21:52.607 Write Uncorrectable Command: Not Supported 00:21:52.607 Dataset Management Command: Supported 00:21:52.607 Write Zeroes Command: Supported 00:21:52.607 Set Features Save Field: Not Supported 00:21:52.607 Reservations: Not Supported 00:21:52.607 Timestamp: Not Supported 00:21:52.607 Copy: Not Supported 00:21:52.607 Volatile Write Cache: Present 00:21:52.607 Atomic Write Unit (Normal): 1 00:21:52.607 Atomic Write Unit (PFail): 1 00:21:52.607 Atomic Compare & Write Unit: 1 00:21:52.607 Fused Compare & Write: Not Supported 00:21:52.607 Scatter-Gather List 00:21:52.607 SGL Command Set: Supported 00:21:52.607 SGL Keyed: Not Supported 00:21:52.607 SGL Bit Bucket Descriptor: Not Supported 00:21:52.607 SGL Metadata Pointer: Not Supported 00:21:52.607 Oversized SGL: Not Supported 00:21:52.607 SGL Metadata Address: Not Supported 00:21:52.607 SGL Offset: Supported 00:21:52.607 Transport SGL Data Block: Not Supported 00:21:52.607 Replay Protected Memory Block: Not Supported 00:21:52.607 00:21:52.607 Firmware Slot Information 00:21:52.607 ========================= 00:21:52.607 Active slot: 0 00:21:52.607 00:21:52.607 Asymmetric Namespace Access 00:21:52.607 =========================== 00:21:52.607 Change Count : 0 00:21:52.607 Number of ANA Group Descriptors : 1 00:21:52.607 ANA Group Descriptor : 0 00:21:52.607 ANA Group ID : 1 00:21:52.607 Number of NSID Values : 1 00:21:52.607 Change Count : 0 00:21:52.607 ANA State : 1 00:21:52.607 Namespace Identifier : 1 00:21:52.607 00:21:52.607 Commands Supported and Effects 00:21:52.607 ============================== 00:21:52.607 Admin Commands 00:21:52.607 -------------- 00:21:52.607 Get Log Page (02h): Supported 00:21:52.607 Identify (06h): Supported 00:21:52.607 Abort (08h): Supported 00:21:52.607 Set Features (09h): Supported 00:21:52.607 Get Features (0Ah): Supported 00:21:52.607 Asynchronous Event Request (0Ch): Supported 
00:21:52.607 Keep Alive (18h): Supported 00:21:52.607 I/O Commands 00:21:52.607 ------------ 00:21:52.607 Flush (00h): Supported 00:21:52.607 Write (01h): Supported LBA-Change 00:21:52.607 Read (02h): Supported 00:21:52.608 Write Zeroes (08h): Supported LBA-Change 00:21:52.608 Dataset Management (09h): Supported 00:21:52.608 00:21:52.608 Error Log 00:21:52.608 ========= 00:21:52.608 Entry: 0 00:21:52.608 Error Count: 0x3 00:21:52.608 Submission Queue Id: 0x0 00:21:52.608 Command Id: 0x5 00:21:52.608 Phase Bit: 0 00:21:52.608 Status Code: 0x2 00:21:52.608 Status Code Type: 0x0 00:21:52.608 Do Not Retry: 1 00:21:52.608 Error Location: 0x28 00:21:52.608 LBA: 0x0 00:21:52.608 Namespace: 0x0 00:21:52.608 Vendor Log Page: 0x0 00:21:52.608 ----------- 00:21:52.608 Entry: 1 00:21:52.608 Error Count: 0x2 00:21:52.608 Submission Queue Id: 0x0 00:21:52.608 Command Id: 0x5 00:21:52.608 Phase Bit: 0 00:21:52.608 Status Code: 0x2 00:21:52.608 Status Code Type: 0x0 00:21:52.608 Do Not Retry: 1 00:21:52.608 Error Location: 0x28 00:21:52.608 LBA: 0x0 00:21:52.608 Namespace: 0x0 00:21:52.608 Vendor Log Page: 0x0 00:21:52.608 ----------- 00:21:52.608 Entry: 2 00:21:52.608 Error Count: 0x1 00:21:52.608 Submission Queue Id: 0x0 00:21:52.608 Command Id: 0x4 00:21:52.608 Phase Bit: 0 00:21:52.608 Status Code: 0x2 00:21:52.608 Status Code Type: 0x0 00:21:52.608 Do Not Retry: 1 00:21:52.608 Error Location: 0x28 00:21:52.608 LBA: 0x0 00:21:52.608 Namespace: 0x0 00:21:52.608 Vendor Log Page: 0x0 00:21:52.608 00:21:52.608 Number of Queues 00:21:52.608 ================ 00:21:52.608 Number of I/O Submission Queues: 128 00:21:52.608 Number of I/O Completion Queues: 128 00:21:52.608 00:21:52.608 ZNS Specific Controller Data 00:21:52.608 ============================ 00:21:52.608 Zone Append Size Limit: 0 00:21:52.608 00:21:52.608 00:21:52.608 Active Namespaces 00:21:52.608 ================= 00:21:52.608 get_feature(0x05) failed 00:21:52.608 Namespace ID:1 00:21:52.608 Command Set Identifier: NVM (00h) 00:21:52.608 Deallocate: Supported 00:21:52.608 Deallocated/Unwritten Error: Not Supported 00:21:52.608 Deallocated Read Value: Unknown 00:21:52.608 Deallocate in Write Zeroes: Not Supported 00:21:52.608 Deallocated Guard Field: 0xFFFF 00:21:52.608 Flush: Supported 00:21:52.608 Reservation: Not Supported 00:21:52.608 Namespace Sharing Capabilities: Multiple Controllers 00:21:52.608 Size (in LBAs): 1310720 (5GiB) 00:21:52.608 Capacity (in LBAs): 1310720 (5GiB) 00:21:52.608 Utilization (in LBAs): 1310720 (5GiB) 00:21:52.608 UUID: 13c1a17f-2def-4d33-a280-4273921ae503 00:21:52.608 Thin Provisioning: Not Supported 00:21:52.608 Per-NS Atomic Units: Yes 00:21:52.608 Atomic Boundary Size (Normal): 0 00:21:52.608 Atomic Boundary Size (PFail): 0 00:21:52.608 Atomic Boundary Offset: 0 00:21:52.608 NGUID/EUI64 Never Reused: No 00:21:52.608 ANA group ID: 1 00:21:52.608 Namespace Write Protected: No 00:21:52.608 Number of LBA Formats: 1 00:21:52.608 Current LBA Format: LBA Format #00 00:21:52.608 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:21:52.608 00:21:52.608 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:21:52.608 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@331 -- # nvmfcleanup 00:21:52.608 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@99 -- # sync 00:21:52.608 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:52.608 13:26:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # set +e 00:21:52.608 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:52.608 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:52.608 rmmod nvme_tcp 00:21:52.868 rmmod nvme_fabrics 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # set -e 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # return 0 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # '[' -n '' ']' 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # nvmf_fini 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@264 -- # local dev 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@270 -- # 
for dev in "${dev_map[@]}" 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@271 -- # continue 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@271 -- # continue 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # _dev=0 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # dev_map=() 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@284 -- # iptr 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@538 -- # iptables-save 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@538 -- # iptables-restore 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # echo 0 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # modules=(/sys/module/nvmet/holders/*) 00:21:52.868 
13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # modprobe -r nvmet_tcp nvmet 00:21:52.868 13:26:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:53.801 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:53.801 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:53.801 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:53.801 00:21:53.801 real 0m3.264s 00:21:53.801 user 0m1.208s 00:21:53.801 sys 0m1.513s 00:21:53.801 13:26:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:53.801 ************************************ 00:21:53.801 END TEST nvmf_identify_kernel_target 00:21:53.801 13:26:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.801 ************************************ 00:21:53.801 13:26:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:53.801 13:26:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:53.801 13:26:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:53.801 13:26:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.801 ************************************ 00:21:53.801 START TEST nvmf_auth_host 00:21:53.801 ************************************ 00:21:53.801 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:54.060 * Looking for test storage... 00:21:54.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
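The trace above finishes nvmf_identify_kernel_target by tearing down the kernel nvmet target through configfs and unloading the nvmet modules. A minimal sketch of that teardown order, assuming the test NQN (nqn.2016-06.io.spdk:testnqn) and port ID 1 that appear in the trace; the exact attribute written by the bare `echo 0` step is not visible here, so the namespace-disable path below is an assumption:

# Teardown sketch for a kernel nvmet target (NQN and port 1 taken from the trace above).
nqn=nqn.2016-06.io.spdk:testnqn
cfg=/sys/kernel/config/nvmet

# Disable the namespace before removing it (assumed target of the 'echo 0' in the trace).
echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"

# Unlink the subsystem from the port, then remove namespace, port, and subsystem directories.
rm -f "$cfg/ports/1/subsystems/$nqn"
rmdir "$cfg/subsystems/$nqn/namespaces/1"
rmdir "$cfg/ports/1"
rmdir "$cfg/subsystems/$nqn"

# Finally unload the transport and core modules, as the trace does.
modprobe -r nvmet_tcp nvmet

Configfs removal has to follow this order: the port-to-subsystem symlink must go before the port directory, and namespaces before their subsystem, or the rmdir calls fail with EBUSY.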
scripts/common.sh@345 -- # : 1 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:54.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.060 --rc genhtml_branch_coverage=1 00:21:54.060 --rc genhtml_function_coverage=1 00:21:54.060 --rc genhtml_legend=1 00:21:54.060 --rc geninfo_all_blocks=1 00:21:54.060 --rc geninfo_unexecuted_blocks=1 00:21:54.060 00:21:54.060 ' 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:54.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.060 --rc genhtml_branch_coverage=1 00:21:54.060 --rc genhtml_function_coverage=1 00:21:54.060 --rc genhtml_legend=1 00:21:54.060 --rc geninfo_all_blocks=1 00:21:54.060 --rc geninfo_unexecuted_blocks=1 00:21:54.060 00:21:54.060 ' 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:54.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.060 --rc genhtml_branch_coverage=1 00:21:54.060 --rc genhtml_function_coverage=1 00:21:54.060 --rc genhtml_legend=1 00:21:54.060 --rc geninfo_all_blocks=1 00:21:54.060 --rc geninfo_unexecuted_blocks=1 00:21:54.060 00:21:54.060 ' 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:54.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.060 --rc genhtml_branch_coverage=1 00:21:54.060 --rc genhtml_function_coverage=1 00:21:54.060 --rc genhtml_legend=1 00:21:54.060 --rc geninfo_all_blocks=1 00:21:54.060 --rc geninfo_unexecuted_blocks=1 00:21:54.060 00:21:54.060 ' 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:54.060 13:26:55 
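Before the auth suite runs, scripts/common.sh checks the installed lcov version with the element-wise comparison traced above (split on separators, compare component by component). A condensed sketch of the same idea, not the script's exact function:

# Compare two dotted versions; succeeds (returns 0) when $1 < $2.
version_lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1
}

version_lt 1.15 2 && echo "lcov older than 2.x, adding branch/function coverage flags"

This is what decides whether the extra --rc lcov_branch_coverage/--rc lcov_function_coverage options seen in the trace get exported.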
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVME_HOSTID=1dd592da-03b1-46ba-b90a-3aebb25e3723 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.060 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@50 -- # : 0 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:54.061 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:54.061 
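The common.sh environment traced over the last few entries derives the test's host identity with nvme-cli: `nvme gen-hostnqn` produced nqn.2014-08.org.nvmexpress:uuid:1dd592da-..., and NVME_HOSTID carries the matching UUID. A small sketch of that derivation; the suffix-stripping step is inferred from the values in the trace, not read from the script:

# Generate a host NQN and derive the host ID from its UUID suffix (inferred from the trace).
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}

# Arguments later passed to 'nvme connect' and the test tools, as in the traced NVME_HOST array.
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
echo "host: ${NVME_HOST[*]}"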
13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # '[' -z tcp ']' 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # prepare_net_devs 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # local -g is_hw=no 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # remove_target_ns 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # [[ virt != virt ]] 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # [[ no == yes ]] 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # [[ virt == phy ]] 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # [[ virt == phy-fallback ]] 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@275 -- # [[ tcp == tcp ]] 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@276 -- # nvmf_veth_init 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@231 -- # local total_initiator_target_pairs=2 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@233 -- # create_target_ns 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=lo 
in_ns=NVMF_TARGET_NS_CMD 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@234 -- # create_main_bridge 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@114 -- # delete_main_bridge 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@130 -- # return 0 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@116 -- # ip link add nvmf_br type bridge 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # set_up nvmf_br 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=nvmf_br in_ns= 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set nvmf_br up' 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set nvmf_br up 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@119 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@537 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@235 -- # setup_interfaces 2 veth 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@28 -- # local -g _dev 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # ips=() 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
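nvmftestinit then builds the virtual test network. The entries above create the nvmf_ns_spdk namespace, bring up its loopback, and add the nvmf_br bridge with a FORWARD accept rule; the veth pairs follow below. The namespace-and-bridge part reduces to:

# Create the target network namespace and bring up loopback inside it.
ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up

# Create the test bridge on the host side and allow forwarding across it.
ip link add nvmf_br type bridge
ip link set nvmf_br up
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'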
nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # create_veth initiator0 initiator0_br 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local dev=initiator0 peer=initiator0_br 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@158 -- # ip link add initiator0 type veth peer name initiator0_br 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@160 -- # set_up initiator0 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0 up' 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@161 -- # set_up initiator0_br 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@68 -- # create_veth target0 target0_br 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local dev=target0 peer=target0_br 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@158 -- # ip link add target0 type veth peer name target0_br 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@160 -- # set_up target0 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=target0 in_ns= 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set target0 up' 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set target0 up 00:21:54.061 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@161 -- # set_up target0_br 00:21:54.062 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:21:54.062 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:54.062 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:21:54.062 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:21:54.062 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:21:54.062 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # add_to_ns target0 00:21:54.062 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@152 -- # local dev=target0 ns=nvmf_ns_spdk 00:21:54.062 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@153 -- # ip link set target0 netns nvmf_ns_spdk 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@72 -- # set_ip initiator0 167772161 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator0 ip=167772161 in_ns= 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772161 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev initiator0 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator0/ifalias 00:21:54.321 10.0.0.1 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@73 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772162 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:21:54.321 10.0.0.2 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@75 -- # set_up initiator0 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=initiator0 in_ns= 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # 
eval ' ip link set initiator0 up' 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set initiator0 up 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # set_up target0 NVMF_TARGET_NS_CMD 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@78 -- # add_to_bridge initiator0_br 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@135 -- # local dev=initiator0_br bridge=nvmf_br 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@136 -- # ip link set initiator0_br master nvmf_br 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@138 -- # set_up initiator0_br 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=initiator0_br in_ns= 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set initiator0_br up' 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set initiator0_br up 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@79 -- # add_to_bridge target0_br 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@135 -- # local dev=target0_br bridge=nvmf_br 00:21:54.321 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@136 -- # ip link set target0_br master nvmf_br 00:21:54.321 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@138 -- # set_up target0_br 00:21:54.321 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=target0_br in_ns= 00:21:54.321 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:54.321 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set target0_br up' 00:21:54.321 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set target0_br up 00:21:54.321 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:21:54.321 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator0 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target0 
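Each initiator/target pair is two veth pairs: the trace above wires initiator0 and target0 to bridge-facing peers, moves target0 into the namespace, assigns 10.0.0.1/10.0.0.2, records the addresses in ifalias, and opens TCP port 4420 (the second pair, traced next, repeats this with 10.0.0.3/10.0.0.4). Condensed, the first pair amounts to:

# veth pairs: one end keeps the initiator0/target0 name, the *_br peer ends join the bridge.
ip link add initiator0 type veth peer name initiator0_br
ip link add target0    type veth peer name target0_br
ip link set initiator0 up; ip link set initiator0_br up
ip link set target0 up;    ip link set target0_br up

# The target side lives inside the namespace; 10.0.0.1 is the initiator, 10.0.0.2 the target.
ip link set target0 netns nvmf_ns_spdk
ip addr add 10.0.0.1/24 dev initiator0
echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias
ip netns exec nvmf_ns_spdk ip link set target0 up

# Bridge both *_br peers and accept NVMe/TCP traffic on the default port.
ip link set initiator0_br master nvmf_br
ip link set target0_br master nvmf_br
ip link set initiator0_br up
ip link set target0_br up
iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT

The ifalias files matter later: get_ip_address resolves an interface's test IP by reading /sys/class/net/<dev>/ifalias rather than parsing `ip addr` output.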
00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # ips=() 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@46 -- # local key_initiator=initiator1 key_target=target1 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # [[ veth == phy ]] 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # [[ veth == veth ]] 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # create_veth initiator1 initiator1_br 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local dev=initiator1 peer=initiator1_br 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@158 -- # ip link add initiator1 type veth peer name initiator1_br 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@160 -- # set_up initiator1 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@161 -- # set_up initiator1_br 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@68 -- # [[ veth == veth ]] 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@68 -- # create_veth target1 target1_br 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local dev=target1 peer=target1_br 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@158 -- # ip link add target1 type veth peer name target1_br 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@160 -- # set_up target1 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=target1 in_ns= 00:21:54.322 13:26:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set target1 up' 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set target1 up 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@161 -- # set_up target1_br 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # add_to_ns target1 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # local dev=target1 ns=nvmf_ns_spdk 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@153 -- # ip link set target1 netns nvmf_ns_spdk 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@72 -- # set_ip initiator1 167772163 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator1 ip=167772163 in_ns= 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # val_to_ip 167772163 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772163 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip=10.0.0.3 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.3/24 dev initiator1 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # echo 10.0.0.3 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # tee /sys/class/net/initiator1/ifalias 00:21:54.322 10.0.0.3 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@73 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # val_to_ip 167772164 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772164 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip=10.0.0.4 00:21:54.322 13:26:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # echo 10.0.0.4 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:21:54.322 10.0.0.4 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@75 -- # set_up initiator1 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=initiator1 in_ns= 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1 up' 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set initiator1 up 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # set_up target1 NVMF_TARGET_NS_CMD 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@78 -- # [[ veth == veth ]] 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@78 -- # add_to_bridge initiator1_br 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@135 -- # local dev=initiator1_br bridge=nvmf_br 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@136 -- # ip link set initiator1_br master nvmf_br 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@138 -- # set_up initiator1_br 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=initiator1_br in_ns= 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set initiator1_br up' 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set initiator1_br up 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@79 -- # [[ veth == veth ]] 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@79 -- # add_to_bridge target1_br 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@135 -- # local dev=target1_br bridge=nvmf_br 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@136 -- # ip link set target1_br master nvmf_br 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@138 -- # set_up target1_br 00:21:54.322 13:26:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=target1_br in_ns= 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set target1_br up' 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set target1_br up 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@537 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=initiator1 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=target1 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@38 -- # ping_ips 2 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@96 -- # local pairs=2 pair 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:21:54.322 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo initiator0 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=initiator0 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:54.583 13:26:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:54.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:54.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:21:54.583 00:21:54.583 --- 10.0.0.1 ping statistics --- 00:21:54.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.583 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target0 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo target0 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=target0 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:21:54.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:54.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:21:54.583 00:21:54.583 --- 10.0.0.2 ping statistics --- 00:21:54.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.583 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 1 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=initiator1 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo initiator1 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=initiator1 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:21:54.583 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:54.583 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:21:54.583 00:21:54.583 --- 10.0.0.3 ping statistics --- 00:21:54.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.583 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target1 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo target1 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=target1 00:21:54.583 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.4 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.4 in_ns= count=1 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.4' 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.4 00:21:54.584 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:21:54.584 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.093 ms 00:21:54.584 00:21:54.584 --- 10.0.0.4 ping statistics --- 00:21:54.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.584 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@237 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # return 0 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@294 -- # '[' '' == iso ']' 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # nvmf_legacy_env 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=target0 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=target1 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo initiator0 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=initiator0 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator0/ifalias 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev initiator1 
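Annotation: for reference, the addresses that the surrounding trace resolves into the legacy environment variables can be summarized as plain shell assignments (values taken directly from the ifalias reads in this part of the log; the initiator devices live on the host side, the target devices inside the nvmf_ns_spdk namespace).

NVMF_FIRST_INITIATOR_IP=10.0.0.1    # initiator0 (host side)
NVMF_FIRST_TARGET_IP=10.0.0.2       # target0   (netns nvmf_ns_spdk)
NVMF_SECOND_INITIATOR_IP=10.0.0.3   # initiator1 (host side)
NVMF_SECOND_TARGET_IP=10.0.0.4      # target1   (netns nvmf_ns_spdk)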
00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=initiator1 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo initiator1 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=initiator1 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/initiator1/ifalias 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.3 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.3 ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.3 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target0 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo target0 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=target0 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:54.584 13:26:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target1 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo target1 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=target1 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.4 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.4 ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.4 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@305 -- # [[ tcp == \r\d\m\a ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@307 -- # [[ tcp == \t\c\p ]] 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@311 -- # '[' tcp == tcp ']' 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # modprobe nvme-tcp 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # timing_enter start_nvmf_tgt 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@324 -- # nvmfpid=77647 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # waitforlisten 77647 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 77647 ']' 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@323 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.584 13:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # timing_exit start_nvmf_tgt 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # local digest len file key 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # local -A digests 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # digest=null 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # len=32 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # key=0db76ef803e8dfba04276ca2bda67e06 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # mktemp -t spdk.key-null.XXX 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # file=/tmp/spdk.key-null.Yjw 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@527 -- # format_dhchap_key 0db76ef803e8dfba04276ca2bda67e06 0 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # format_key DHHC-1 0db76ef803e8dfba04276ca2bda67e06 0 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # local prefix key digest 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # prefix=DHHC-1 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # key=0db76ef803e8dfba04276ca2bda67e06 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # digest=0 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # python - 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # chmod 0600 /tmp/spdk.key-null.Yjw 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # echo /tmp/spdk.key-null.Yjw 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Yjw 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # local digest 
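Annotation: nvmfappstart above launches nvmf_tgt inside the namespace (`ip netns exec nvmf_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth`) and waitforlisten blocks until the RPC socket answers. A minimal sketch of that wait pattern, assuming scripts/rpc.py and the default /var/tmp/spdk.sock socket; the real waitforlisten has more checks, so treat this as illustrative only.

# Sketch: poll the SPDK RPC socket until the just-started target is ready.
pid=$!                                   # pid of the backgrounded nvmf_tgt (assumed)
for ((i = 0; i < 100; i++)); do
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "nvmf_tgt exited before listening" >&2
        exit 1
    fi
    if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break                            # RPC server is up
    fi
    sleep 0.1
done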
len file key 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # local -A digests 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # digest=sha512 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # len=64 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # key=992a2891814b5da54c5ca52309d7b405a7935540c95236d8718a21bb23788d18 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # mktemp -t spdk.key-sha512.XXX 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # file=/tmp/spdk.key-sha512.0C8 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@527 -- # format_dhchap_key 992a2891814b5da54c5ca52309d7b405a7935540c95236d8718a21bb23788d18 3 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # format_key DHHC-1 992a2891814b5da54c5ca52309d7b405a7935540c95236d8718a21bb23788d18 3 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # local prefix key digest 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # prefix=DHHC-1 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # key=992a2891814b5da54c5ca52309d7b405a7935540c95236d8718a21bb23788d18 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # digest=3 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # python - 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # chmod 0600 /tmp/spdk.key-sha512.0C8 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # echo /tmp/spdk.key-sha512.0C8 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.0C8 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # local digest len file key 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # local -A digests 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # digest=null 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # len=48 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # key=02b88b5d04111ba3f155fc7919a6a518f9908758fb7c734a 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # mktemp -t spdk.key-null.XXX 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # file=/tmp/spdk.key-null.WNw 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@527 -- # format_dhchap_key 02b88b5d04111ba3f155fc7919a6a518f9908758fb7c734a 0 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # format_key DHHC-1 
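Annotation: gen_dhchap_key above draws random bytes with `xxd -p -c0 -l <len/2> /dev/urandom` and pipes the hex through a small inline Python helper (the `python -` step) to produce a DHHC-1 secret file. The encoding itself is not visible in the trace; the sketch below is a hedged reconstruction assuming the conventional DH-HMAC-CHAP representation `DHHC-1:<digest-id>:<base64(secret || crc32)>:`, with digest ids 0/1/2/3 for null/sha256/sha384/sha512 as in the digests map shown above.

# Sketch only: the exact byte layout (CRC-32 suffix, endianness) is an assumption.
hexkey=$(xxd -p -c0 -l 16 /dev/urandom)      # 32 hex chars -> 16 secret bytes
digest=0                                     # 0=null, 1=sha256, 2=sha384, 3=sha512
python3 - "$hexkey" "$digest" <<'PY'
import sys, base64, binascii
secret = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(secret).to_bytes(4, "little")   # assumed little-endian CRC-32 suffix
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(secret + crc).decode()}:")
PY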
02b88b5d04111ba3f155fc7919a6a518f9908758fb7c734a 0 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # local prefix key digest 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # prefix=DHHC-1 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # key=02b88b5d04111ba3f155fc7919a6a518f9908758fb7c734a 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # digest=0 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # python - 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # chmod 0600 /tmp/spdk.key-null.WNw 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # echo /tmp/spdk.key-null.WNw 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.WNw 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # local digest len file key 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # local -A digests 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # digest=sha384 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # len=48 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # key=5e23f9058f6ce74aa7f196bb9f355ea637b2a8c0a863fdf6 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # mktemp -t spdk.key-sha384.XXX 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # file=/tmp/spdk.key-sha384.uE1 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@527 -- # format_dhchap_key 5e23f9058f6ce74aa7f196bb9f355ea637b2a8c0a863fdf6 2 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # format_key DHHC-1 5e23f9058f6ce74aa7f196bb9f355ea637b2a8c0a863fdf6 2 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # local prefix key digest 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # prefix=DHHC-1 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # key=5e23f9058f6ce74aa7f196bb9f355ea637b2a8c0a863fdf6 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # digest=2 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # python - 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # chmod 0600 /tmp/spdk.key-sha384.uE1 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # echo /tmp/spdk.key-sha384.uE1 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.uE1 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # local digest len file key 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # digests=(['null']='0' ['sha256']='1' 
['sha384']='2' ['sha512']='3') 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # local -A digests 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # digest=sha256 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # len=32 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # key=a150d451975c757fd4adca16b35d9a19 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # mktemp -t spdk.key-sha256.XXX 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # file=/tmp/spdk.key-sha256.BXJ 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@527 -- # format_dhchap_key a150d451975c757fd4adca16b35d9a19 1 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # format_key DHHC-1 a150d451975c757fd4adca16b35d9a19 1 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # local prefix key digest 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # prefix=DHHC-1 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # key=a150d451975c757fd4adca16b35d9a19 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # digest=1 00:21:55.967 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # python - 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # chmod 0600 /tmp/spdk.key-sha256.BXJ 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # echo /tmp/spdk.key-sha256.BXJ 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.BXJ 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # local digest len file key 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # local -A digests 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # digest=sha256 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # len=32 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # key=c97092a504f13a5ff59c5f09ca52524a 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # mktemp -t spdk.key-sha256.XXX 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # file=/tmp/spdk.key-sha256.nIX 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@527 -- # format_dhchap_key c97092a504f13a5ff59c5f09ca52524a 1 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # format_key DHHC-1 c97092a504f13a5ff59c5f09ca52524a 1 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # local prefix key digest 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # prefix=DHHC-1 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@502 -- # key=c97092a504f13a5ff59c5f09ca52524a 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # digest=1 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # python - 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # chmod 0600 /tmp/spdk.key-sha256.nIX 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # echo /tmp/spdk.key-sha256.nIX 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.nIX 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # local digest len file key 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # local -A digests 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # digest=sha384 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # len=48 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # key=2e84c49bf1c8f6e55704a8394260d09215fd4da004c42c7b 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # mktemp -t spdk.key-sha384.XXX 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # file=/tmp/spdk.key-sha384.Pap 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@527 -- # format_dhchap_key 2e84c49bf1c8f6e55704a8394260d09215fd4da004c42c7b 2 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # format_key DHHC-1 2e84c49bf1c8f6e55704a8394260d09215fd4da004c42c7b 2 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # local prefix key digest 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # prefix=DHHC-1 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # key=2e84c49bf1c8f6e55704a8394260d09215fd4da004c42c7b 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # digest=2 00:21:56.227 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # python - 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # chmod 0600 /tmp/spdk.key-sha384.Pap 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # echo /tmp/spdk.key-sha384.Pap 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Pap 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # local digest len file key 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # local -A digests 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # digest=null 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # len=32 00:21:56.228 
13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # key=8f0d5aa04661275e9f9e66e89feaf4df 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # mktemp -t spdk.key-null.XXX 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # file=/tmp/spdk.key-null.9tr 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@527 -- # format_dhchap_key 8f0d5aa04661275e9f9e66e89feaf4df 0 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # format_key DHHC-1 8f0d5aa04661275e9f9e66e89feaf4df 0 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # local prefix key digest 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # prefix=DHHC-1 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # key=8f0d5aa04661275e9f9e66e89feaf4df 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # digest=0 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # python - 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # chmod 0600 /tmp/spdk.key-null.9tr 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # echo /tmp/spdk.key-null.9tr 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.9tr 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # local digest len file key 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # local -A digests 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # digest=sha512 00:21:56.228 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # len=64 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # key=add97ab2f7a205a0d002b23518843834d93fa7ff7d66f8c613ebec5ed6fe1bf2 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # mktemp -t spdk.key-sha512.XXX 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # file=/tmp/spdk.key-sha512.YWd 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@527 -- # format_dhchap_key add97ab2f7a205a0d002b23518843834d93fa7ff7d66f8c613ebec5ed6fe1bf2 3 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # format_key DHHC-1 add97ab2f7a205a0d002b23518843834d93fa7ff7d66f8c613ebec5ed6fe1bf2 3 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # local prefix key digest 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # prefix=DHHC-1 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # key=add97ab2f7a205a0d002b23518843834d93fa7ff7d66f8c613ebec5ed6fe1bf2 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # digest=3 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@503 -- # python - 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # chmod 0600 /tmp/spdk.key-sha512.YWd 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # echo /tmp/spdk.key-sha512.YWd 00:21:56.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.YWd 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 77647 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 77647 ']' 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:56.488 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Yjw 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.0C8 ]] 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0C8 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.WNw 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.uE1 ]] 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.uE1 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.BXJ 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.nIX ]] 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nIX 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Pap 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.9tr ]] 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.9tr 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.YWd 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
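Annotation: each generated secret file is registered with the running target through the keyring. rpc_cmd in these suites effectively forwards to scripts/rpc.py against /var/tmp/spdk.sock, so the calls above amount to invoking the keyring_file_add_key RPC directly (key names and paths as they appear in the trace):

scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.Yjw
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0C8
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key1  /tmp/spdk.key-null.WNw
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uE1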
nvmf/common.sh@430 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:21:56.748 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmet=/sys/kernel/config/nvmet 00:21:56.749 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@433 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:56.749 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:56.749 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@435 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:56.749 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@437 -- # local block nvme 00:21:56.749 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@439 -- # [[ ! -e /sys/module/nvmet ]] 00:21:56.749 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # modprobe nvmet 00:21:56.749 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:56.749 13:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:57.318 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:57.318 Waiting for block devices as requested 00:21:57.318 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:57.318 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # for block in /sys/block/nvme* 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # is_block_zoned nvme0n1 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # block_in_use nvme0n1 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:57.886 No valid GPT data, bailing 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # nvme=/dev/nvme0n1 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # for block in /sys/block/nvme* 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # is_block_zoned nvme0n2 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # block_in_use nvme0n2 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:57.886 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:58.145 No valid GPT data, bailing 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # nvme=/dev/nvme0n2 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # for block in /sys/block/nvme* 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # is_block_zoned nvme0n3 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # block_in_use nvme0n3 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:58.145 No valid GPT data, bailing 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # nvme=/dev/nvme0n3 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # for block in /sys/block/nvme* 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # is_block_zoned nvme1n1 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # block_in_use nvme1n1 00:21:58.145 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:58.146 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 
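Annotation: the scan above walks /sys/block/nvme*, skips zoned devices, and treats "No valid GPT data, bailing" from spdk-gpt.py together with an empty blkid PTTYPE as "device is free"; the last free device found (here /dev/nvme1n1) becomes the backing namespace for the kernel target. A simplified sketch of that selection, omitting the GPT and mount-table checks the real helpers also perform.

# Sketch: pick an unused, non-zoned NVMe block device (simplified).
nvme_dev=
for blk in /sys/block/nvme*; do
    dev=/dev/${blk##*/}
    zoned=$(cat "$blk/queue/zoned" 2>/dev/null || echo none)
    [[ $zoned == none ]] || continue                               # skip zoned namespaces
    [[ -z $(blkid -s PTTYPE -o value "$dev" 2>/dev/null) ]] || continue  # partition table -> in use
    nvme_dev=$dev
done
echo "using ${nvme_dev:-none} as the kernel target namespace"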
00:21:58.146 No valid GPT data, bailing 00:21:58.146 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:58.146 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:58.146 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:58.146 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # nvme=/dev/nvme1n1 00:21:58.146 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ -b /dev/nvme1n1 ]] 00:21:58.146 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:58.146 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:58.146 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:58.146 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@463 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:21:58.146 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # echo 1 00:21:58.146 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@466 -- # echo /dev/nvme1n1 00:21:58.146 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # echo 1 00:21:58.146 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # echo 10.0.0.1 00:21:58.146 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # echo tcp 00:21:58.146 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@471 -- # echo 4420 00:21:58.146 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # echo ipv4 00:21:58.146 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@475 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:58.146 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1dd592da-03b1-46ba-b90a-3aebb25e3723 --hostid=1dd592da-03b1-46ba-b90a-3aebb25e3723 -a 10.0.0.1 -t tcp -s 4420 00:21:58.404 00:21:58.404 Discovery Log Number of Records 2, Generation counter 2 00:21:58.404 =====Discovery Log Entry 0====== 00:21:58.404 trtype: tcp 00:21:58.404 adrfam: ipv4 00:21:58.404 subtype: current discovery subsystem 00:21:58.404 treq: not specified, sq flow control disable supported 00:21:58.404 portid: 1 00:21:58.404 trsvcid: 4420 00:21:58.404 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:58.404 traddr: 10.0.0.1 00:21:58.404 eflags: none 00:21:58.404 sectype: none 00:21:58.404 =====Discovery Log Entry 1====== 00:21:58.404 trtype: tcp 00:21:58.404 adrfam: ipv4 00:21:58.404 subtype: nvme subsystem 00:21:58.404 treq: not specified, sq flow control disable supported 00:21:58.404 portid: 1 00:21:58.404 trsvcid: 4420 00:21:58.404 subnqn: nqn.2024-02.io.spdk:cnode0 00:21:58.404 traddr: 10.0.0.1 00:21:58.404 eflags: none 00:21:58.404 sectype: none 00:21:58.404 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:58.404 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:21:58.404 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
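Annotation: the mkdir/echo/ln -s sequence above (configure_kernel_target) builds the kernel NVMe-oF/TCP target through the nvmet configfs tree, and the `nvme discover` against 10.0.0.1:4420 then confirms the port exports both the discovery subsystem and nqn.2024-02.io.spdk:cnode0. The xtrace hides the redirection targets, so the attribute paths below follow the standard kernel nvmet configfs layout rather than anything read from the log; the `echo SPDK-nqn...` write (likely a serial/model attribute) is omitted.

# Hedged reconstruction of the configfs writes implied by the trace above.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"
echo 1            > "$subsys/attr_allow_any_host"        # assumed destination of one "echo 1"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"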
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:58.404 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:58.404 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.404 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:58.404 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:58.404 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:58.404 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:21:58.404 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:21:58.404 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:58.404 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:58.404 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:21:58.405 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: ]] 00:21:58.405 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:21:58.405 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:58.405 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:21:58.405 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:58.405 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.405 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:21:58.405 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:58.405 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:21:58.405 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.405 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:58.405 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.405 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.405 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.405 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.405 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.405 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
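Annotation: nvmet_auth_init and nvmet_auth_set_key above create the host entry nqn.2024-02.io.spdk:host0, write 0 (presumably disabling allow_any_host), link the host into the subsystem's allowed_hosts, and then write the hash, DH group and the two DHHC-1 secrets for that host. As with the subsystem setup, the write destinations are hidden by the trace; the attribute names below are the usual kernel nvmet ones and should be read as an assumption, with the long base64 values truncated here.

# Hedged sketch of the kernel-side DH-HMAC-CHAP configuration for the host entry.
nvmet=/sys/kernel/config/nvmet
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"                    # assumed destination of "echo 0"
ln -s "$host" "$subsys/allowed_hosts/"
echo 'hmac(sha256)' > "$host/dhchap_hash"                 # digest under test
echo ffdhe2048      > "$host/dhchap_dhgroup"              # DH group under test
echo "DHHC-1:00:MDJiODhi...==:" > "$host/dhchap_key"      # host secret (key1, truncated)
echo "DHHC-1:02:NWUyM2Y5...==:" > "$host/dhchap_ctrl_key" # controller secret (ckey1, truncated)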
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.405 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.405 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.664 nvme0n1 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: ]] 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:21:58.664 13:27:00 
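Annotation: on the SPDK (initiator) side, connect_authenticate mirrors the same parameters over RPC: bdev_nvme_set_options restricts the allowed digests and DH groups, and bdev_nvme_attach_controller connects to the kernel target with --dhchap-key/--dhchap-ctrlr-key referencing the keyring names registered earlier. Spelled out as direct rpc.py calls (arguments exactly as they appear in the trace; rpc_cmd effectively wraps scripts/rpc.py):

scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1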
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.664 nvme0n1 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:58.664 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:21:58.665 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: ]] 00:21:58.665 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:21:58.665 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:21:58.665 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:58.665 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:58.665 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.923 nvme0n1 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: ]] 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.923 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.182 nvme0n1 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:59.182 13:27:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: ]] 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.182 nvme0n1 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.182 13:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.182 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.182 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.182 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.182 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# keyid=4 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.461 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.462 nvme0n1 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:59.462 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:00.058 
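On the SPDK (host) side, each connect_authenticate round in this trace boils down to two RPCs: first restrict the negotiable DH-HMAC-CHAP digest and DH group to the combination under test, then attach the controller with the key pair for the current keyid. A minimal sketch of one round, assuming scripts/rpc.py is the client behind the test's rpc_cmd wrapper and that the key names key0/ckey0 resolve to secrets registered earlier in the test run (not shown in this excerpt):

  # Allow only the digest/dhgroup under test, then connect with DH-HMAC-CHAP.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0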
13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: ]] 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.058 nvme0n1 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:00.058 
13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: ]] 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.058 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:00.059 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:00.059 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:00.059 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.059 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:00.059 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.059 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.059 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.059 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.059 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.059 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.317 nvme0n1 00:22:00.318 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.318 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.318 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.318 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.318 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- 
# jq -r '.[].name' 00:22:00.318 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: ]] 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.318 nvme0n1 00:22:00.318 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: ]] 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@57 -- # keyid=3 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.577 nvme0n1 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.577 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.836 nvme0n1 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:00.836 13:27:02 
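The nvmet_auth_set_key calls interleaved through the trace program the other end of the handshake: the kernel nvmet target is told which hash, DH group and secrets to expect for this host before the SPDK initiator connects. A rough reconstruction of what that helper does is sketched below; the configfs attribute names are an assumption (they are not visible in this log) and may differ between kernel versions:

  # Target (kernel nvmet) side of one iteration - attribute names assumed, not taken from this log.
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "${host_dir}/dhchap_hash"       # digest under test
  echo ffdhe4096 > "${host_dir}/dhchap_dhgroup"         # DH group under test
  echo 'DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua:' > "${host_dir}/dhchap_key"
  # The controller key is only written when a ckey exists for this keyid (keyid 4 has none).
  echo 'DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=:' > "${host_dir}/dhchap_ctrl_key"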
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:00.836 13:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: ]] 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.773 nvme0n1 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:01.773 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:01.774 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: ]] 00:22:01.774 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:01.774 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:22:01.774 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:01.774 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:01.774 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:01.774 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:01.774 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.774 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:01.774 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.774 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.774 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.774 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.774 13:27:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.774 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.033 nvme0n1 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: ]] 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.033 13:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.293 nvme0n1 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:02.293 13:27:04 
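After each successful attach the trace verifies that exactly one controller named nvme0 came up and then detaches it, so the next digest/dhgroup/keyid combination starts from a clean slate. In shell terms the check is simply the following (again assuming scripts/rpc.py behind rpc_cmd):

  # Confirm the authenticated connection exists, then tear it down before the next iteration.
  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$name" == "nvme0" ]]                      # a mismatch (or empty output) fails the test
  scripts/rpc.py bdev_nvme_detach_controller nvme0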
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: ]] 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:02.293 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:02.552 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:02.552 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:02.552 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:02.552 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.552 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.552 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.552 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:02.552 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.552 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.552 nvme0n1 00:22:02.552 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.552 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:02.552 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:02.552 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.552 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.552 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.552 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.552 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.552 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.552 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:02.812 13:27:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.812 nvme0n1 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:02.812 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.071 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.071 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:03.071 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.071 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:03.071 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.071 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:03.071 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:03.071 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:22:03.071 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:03.071 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:03.071 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:03.071 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:03.071 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:03.071 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:03.071 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:03.071 13:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:04.975 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:04.975 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: ]] 00:22:04.975 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:04.975 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:22:04.975 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:04.975 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:04.975 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:04.975 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:04.975 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:04.975 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:04.975 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.975 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.975 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.975 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.975 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.975 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.234 
nvme0n1 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: ]] 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:05.234 13:27:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.234 13:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.494 nvme0n1 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: ]] 00:22:05.494 13:27:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.494 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.062 nvme0n1 00:22:06.062 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.062 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.062 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:06.062 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.062 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.062 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.062 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.062 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.062 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.062 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.062 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.062 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:06.062 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:22:06.062 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:06.062 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:06.062 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:06.062 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
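[editor's note] The block above repeats one fixed pattern per key: connect_authenticate() (host/auth.sh@104) restricts the initiator to the digest/dhgroup under test, attaches a controller with the DH-HMAC-CHAP key (plus the controller key when one exists), checks that the controller actually came up, and detaches it again. A minimal bash sketch of that host-side sequence, using only the RPCs visible in the trace, is below; treating rpc_cmd as the SPDK autotest wrapper around scripts/rpc.py is an assumption, and the digest/dhgroup/keyid values are simply the ones from the iteration above.

# Host-side flow per iteration, reconstructed from the trace above (a sketch, not
# the script itself). rpc_cmd is assumed to proxy scripts/rpc.py to the target app.
digest=sha256 dhgroup=ffdhe6144 keyid=2
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
# Authentication passed if the attached controller is reported under its name.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0

The bare nvme0n1 lines interleaved in the trace appear to be the namespace reported while each controller is attached.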
00:22:06.062 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:06.063 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:06.063 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:06.063 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:06.063 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:06.063 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: ]] 00:22:06.063 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:06.063 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:22:06.063 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:06.063 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:06.063 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:06.063 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:06.063 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:06.063 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:06.063 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.063 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.063 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.063 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:06.063 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.063 13:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.321 nvme0n1 00:22:06.321 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.321 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.321 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:06.321 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.321 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.321 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.580 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.840 nvme0n1 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:06.840 13:27:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: ]] 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.840 13:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.778 nvme0n1 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: ]] 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 
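[editor's note] Note how host/auth.sh@58 builds the optional controller-key arguments, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), and how the check at @51 shows [[ -z '' ]] for keyid 4 (which has no controller secret) versus the non-empty DHHC-1 values for the other keys: bidirectional authentication is only requested when a controller key was provisioned. A self-contained sketch of that bash idiom, with obviously fake placeholder secrets, follows.

# The ${var:+word} idiom from host/auth.sh@58: expand to the extra arguments only
# when a controller key exists for this keyid. Secrets here are placeholders.
declare -a ckeys=([0]="DHHC-1:03:placeholder-ctrlr-secret:" [4]="")
for keyid in 0 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> extra args: ${ckey[*]:-<none>}"
done
# Prints: keyid=0 -> extra args: --dhchap-ctrlr-key ckey0
#         keyid=4 -> extra args: <none>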
00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.778 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.779 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.779 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.779 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.779 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.346 nvme0n1 00:22:08.346 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.346 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.346 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:08.346 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.346 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.346 13:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: ]] 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.346 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.914 nvme0n1 00:22:08.914 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.914 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.914 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:08.914 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.914 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.914 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.173 
13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: ]] 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.173 13:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.741 nvme0n1 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.741 13:27:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.741 13:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.308 nvme0n1 00:22:10.308 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.308 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.308 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:10.308 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.308 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.308 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: ]] 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 
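[editor's note] From here the trace switches from sha256 to sha384 (host/auth.sh@100 picks the next digest) and starts over at ffdhe2048, so the overall shape of the test is three nested loops: for every digest, every dhgroup and every keyid, provision the key on the target side (nvmet_auth_set_key, @103) and then run the host-side connect_authenticate (@104). The sketch below reconstructs that driver loop; the array contents and the two stub helpers are assumptions standing in for the real values and functions, which are not shown in this excerpt.

# Driver loops as reconstructed from host/auth.sh@100-104 in the trace. The arrays
# and the stubbed helpers are placeholders, not copied from the real script.
digests=(sha256 sha384 sha512)                                # assumed list
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # assumed list
keys=(key0 key1 key2 key3 key4)                               # placeholder secrets
nvmet_auth_set_key()   { echo "target: digest=$1 dhgroup=$2 keyid=$3"; }  # stub
connect_authenticate() { echo "host:   digest=$1 dhgroup=$2 keyid=$3"; }  # stub

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done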
00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.566 nvme0n1 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:10.566 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:10.567 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:10.567 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: ]] 00:22:10.567 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:10.567 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:22:10.567 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:10.567 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:10.567 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:10.567 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:10.567 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.567 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:10.567 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.567 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.567 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.567 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.567 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.567 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.825 nvme0n1 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: ]] 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.825 nvme0n1 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:10.825 13:27:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.825 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: ]] 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.083 nvme0n1 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:11.083 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:11.084 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:22:11.084 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:11.084 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:11.084 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:11.084 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# keyid=4 00:22:11.084 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:11.084 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:11.084 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.084 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.084 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.084 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:11.084 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.084 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.341 nvme0n1 00:22:11.341 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.341 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:11.341 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.341 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:11.341 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.341 13:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:11.341 
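
The trace repeats one unit per (digest, dhgroup, keyid) combination: nvmet_auth_set_key provisions the DH-HMAC-CHAP secret for the test host on the target side, then connect_authenticate pins the SPDK initiator to the same digest and DH group via bdev_nvme_set_options and attaches with the matching key name. A minimal sketch of that initiator-side unit, using only the RPC calls visible above (rpc_cmd, the key names key0..key4/ckey0..ckey4 and both NQNs are provided by auth.sh; the standalone variable assignments are added here only for readability):

    # Per-combination flow as seen in the trace (sketch, not the script itself).
    digest=sha384
    dhgroup=ffdhe2048
    keyid=2
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
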
13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: ]] 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.341 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.599 nvme0n1 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:22:11.599 
13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: ]] 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.599 nvme0n1 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:11.599 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: ]] 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.857 nvme0n1 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: ]] 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:11.857 13:27:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.857 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.858 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:11.858 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.858 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.115 nvme0n1 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.115 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.373 nvme0n1 00:22:12.373 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.373 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.373 13:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:12.373 13:27:14 
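
After each attach, the script checks that the authenticated connection actually produced a controller before moving on: bdev_nvme_get_controllers is filtered through jq for the controller name, the result is compared against nvme0, and the controller is detached so the next combination starts clean. A short sketch of that verify-and-cleanup step, assuming the same rpc_cmd wrapper as above:

    # Confirm the DH-HMAC-CHAP attach produced the expected controller, then tear it down.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                      # non-zero exit here fails the test
    rpc_cmd bdev_nvme_detach_controller nvme0   # clean state for the next digest/dhgroup/keyid
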
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: ]] 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.373 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.632 nvme0n1 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: ]] 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.632 13:27:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.632 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.893 nvme0n1 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: ]] 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.893 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.152 nvme0n1 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:13.152 13:27:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: ]] 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.152 13:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.410 nvme0n1 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:13.410 13:27:15 
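
Keys 0 through 3 are paired with a controller key (ckeyN), so those attaches also pass --dhchap-ctrlr-key and exercise bidirectional authentication; key 4 has an empty ckey, which is why the [[ -z '' ]] branch in the trace skips the controller key and only --dhchap-key key4 is sent. The ckey=(${ckeys[keyid]:+...}) line seen at host/auth.sh@58 builds that optional argument pair with bash's :+ expansion, roughly as below (ckey_args is a stand-in name for the script's ckey array):

    # Expands to the two extra arguments only when a controller key exists;
    # with an empty ckeys[keyid] the array is empty and the attach is host-only.
    ckey_args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey_args[@]}"
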
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.410 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.668 nvme0n1 00:22:13.668 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.668 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:13.668 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.668 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.668 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.668 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.668 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.668 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.668 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.668 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:13.668 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.668 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:13.668 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:13.668 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:22:13.668 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:13.668 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:13.668 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:13.668 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: ]] 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.669 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.235 
nvme0n1 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: ]] 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:14.235 13:27:15 
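
On the target side, nvmet_auth_set_key (host/auth.sh@42-51 in the trace) echoes the digest as 'hmac(sha384)', the DH group name, and the DHHC-1 secrets; xtrace does not show the redirection targets, but these payloads are consistent with the kernel nvmet configfs host attributes. The paths and attribute names below are therefore an assumption made for illustration, not something the log confirms:

    # Assumed nvmet configfs layout; only the echoed payloads appear in the trace.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host_dir/dhchap_hash"
    echo ffdhe6144      > "$host_dir/dhchap_dhgroup"
    echo "$key"         > "$host_dir/dhchap_key"
    [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # only for keys with a ckey
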
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.235 13:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.494 nvme0n1 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: ]] 00:22:14.494 13:27:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.494 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.059 nvme0n1 00:22:15.059 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.059 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:15.059 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.059 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:15.059 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.059 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.059 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
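Condensed, the host-side sequence the trace above repeats for each digest/dhgroup/key id combination looks like the sketch below (auth.sh@60, @61, @64 and @65). rpc_cmd is the autotest harness's wrapper around SPDK's scripts/rpc.py, and key1/ckey1 are key names set up earlier in auth.sh, outside this excerpt:

# Restrict the host to the digest/dhgroup under test (auth.sh@60).
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Attach with DH-HMAC-CHAP: --dhchap-key authenticates the host, the optional
# --dhchap-ctrlr-key additionally authenticates the controller (auth.sh@61).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Confirm the controller came up as nvme0, then detach it so the next
# combination starts clean (auth.sh@64-65).
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
rpc_cmd bdev_nvme_detach_controller nvme0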
00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: ]] 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.060 13:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.318 nvme0n1 00:22:15.318 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.318 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:15.318 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:15.318 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.318 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.318 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.576 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.835 nvme0n1 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:15.835 13:27:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: ]] 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
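Key id 4 above is attached without --dhchap-ctrlr-key: its controller secret is empty ([[ -z '' ]] at auth.sh@51), and connect_authenticate builds the flag conditionally with the ${var:+word} expansion shown at auth.sh@58. A minimal self-contained illustration of that expansion (variable names and example values here are illustrative, not the script's):

# ${ckeys[keyid]:+word} expands to nothing when the entry is unset or empty,
# so the --dhchap-ctrlr-key flag is only added when a controller key exists.
ckeys=("set" "set" "set" "set" "")        # key id 4 has no controller secret
keyid=4
ckey_args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey_args[@]}"                   # prints 0 -> host-only (unidirectional) auth

keyid=1
ckey_args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${ckey_args[@]}"                    # prints: --dhchap-ctrlr-key ckey1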
00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.835 13:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.770 nvme0n1 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: ]] 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 
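The auth.sh@100/@101/@102 tags that keep reappearing are the nested loops driving this whole section: every digest/dhgroup/key id combination gets a target-side key install (nvmet_auth_set_key, @103) followed by an authenticated connect (connect_authenticate, @104). Reconstructed as a skeleton; the digests, dhgroups and keys arrays are defined earlier in auth.sh, and the lists below only reflect the values visible in this part of the log:

digests=(sha384 sha512)                              # values seen in this excerpt
dhgroups=(ffdhe2048 ffdhe3072 ffdhe6144 ffdhe8192)   # groups seen in this excerpt

for digest in "${digests[@]}"; do                    # auth.sh@100
    for dhgroup in "${dhgroups[@]}"; do              # auth.sh@101
        for keyid in "${!keys[@]}"; do               # auth.sh@102, keys[] set up earlier
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # auth.sh@103
            connect_authenticate "$digest" "$dhgroup" "$keyid"    # auth.sh@104
        done
    done
done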
00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.770 13:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.337 nvme0n1 00:22:17.337 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.337 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:17.337 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:17.337 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.337 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.337 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.337 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.337 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.337 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.337 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: ]] 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.596 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.163 nvme0n1 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.163 
13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: ]] 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.163 13:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.116 nvme0n1 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:19.116 13:27:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.116 13:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.684 nvme0n1 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: ]] 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 0 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.684 nvme0n1 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.684 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:19.944 13:27:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: ]] 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.944 nvme0n1 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
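After every attach, auth.sh@64 lists the controllers and asserts the name is nvme0, and auth.sh@65 detaches it. The odd-looking [[ nvme0 == \n\v\m\e\0 ]] entries are just how set -x prints the comparison once the variable has expanded: the quoted right-hand side of == is rendered with every character escaped to show it matches literally. An illustrative helper mirroring that step (the function name is made up; the RPCs are the ones in the trace):

# Confirm the attached controller is named nvme0, then remove it so the next
# digest/dhgroup iteration starts from a clean host state (auth.sh@64-65).
verify_and_detach() {
    local name
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]] || return 1   # traces as '[[ nvme0 == \n\v\m\e\0 ]]'
    rpc_cmd bdev_nvme_detach_controller nvme0
}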
00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: ]] 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.944 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.203 nvme0n1 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:20.203 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:20.204 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:20.204 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:20.204 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: ]] 00:22:20.204 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:20.204 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:22:20.204 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:20.204 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:20.204 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:20.204 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:20.204 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.204 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:20.204 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.204 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.204 13:27:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.204 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:20.204 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.204 13:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.204 nvme0n1 00:22:20.204 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.204 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.204 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:20.204 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.204 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.204 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 
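Almost every rpc_cmd in this trace is immediately followed by common/autotest_common.sh@561 xtrace_disable, @10 set +x and @589 [[ 0 == 0 ]]: the harness's RPC wrapper suppresses xtrace while it talks to the SPDK target and, on return, asserts the call's exit status, which has already expanded to 0 by the time set -x prints the test. A simplified stand-in for that pattern, not the actual autotest_common.sh implementation (which is more involved); SPDK_ROOT_DIR here is a placeholder:

# Sketch of the wrapper behaviour visible in the trace: run the RPC through
# SPDK's scripts/rpc.py with tracing off, then assert that it succeeded.
rpc_cmd_sketch() {
    local status
    set +x                                     # roughly what xtrace_disable boils down to
    "${SPDK_ROOT_DIR:-.}/scripts/rpc.py" "$@"
    status=$?
    set -x
    [[ $status == 0 ]]                         # traces as '[[ 0 == 0 ]]' on success
}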
00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.463 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.464 nvme0n1 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:20.464 13:27:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: ]] 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.464 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.724 nvme0n1 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:20.724 13:27:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: ]] 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.724 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.984 nvme0n1 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: ]] 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- 
# rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.984 nvme0n1 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.984 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: ]] 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:21.243 13:27:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.243 13:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.243 nvme0n1 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:21.243 
13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.243 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.502 nvme0n1 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@44 -- # digest=sha512 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: ]] 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.502 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.761 nvme0n1 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.761 
13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: ]] 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:21.761 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.762 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:21.762 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.762 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.762 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.762 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.762 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.762 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.020 nvme0n1 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: ]] 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:22.020 13:27:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.020 13:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.279 nvme0n1 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: ]] 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.279 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.538 nvme0n1 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.538 13:27:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:22.538 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:22.539 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.539 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:22.539 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.539 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.539 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.798 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:22.798 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.798 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.798 nvme0n1 00:22:22.798 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.798 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.798 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.798 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.798 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.798 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.798 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.798 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.798 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:22:22.798 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: ]] 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.057 13:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.057 13:27:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.316 nvme0n1 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:23.316 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:23.317 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:23.317 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: ]] 00:22:23.317 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:23.317 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:22:23.317 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.317 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:23.317 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:23.317 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:23.317 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.317 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:23.317 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.317 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.317 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.317 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.317 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.317 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.885 nvme0n1 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: ]] 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.885 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.144 nvme0n1 00:22:24.144 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.144 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.144 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.144 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.144 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.403 13:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:24.403 
13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: ]] 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.403 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.661 nvme0n1 00:22:24.661 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.661 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.661 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.661 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.661 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.661 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.920 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.920 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.920 13:27:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.920 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.920 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.920 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.920 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:22:24.920 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.920 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:24.920 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:24.920 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:24.920 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:24.920 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:24.920 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:24.920 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:24.920 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:24.920 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:24.920 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:22:24.920 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.921 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:24.921 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:24.921 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:24.921 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.921 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:24.921 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.921 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.921 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.921 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:24.921 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.921 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.180 nvme0n1 00:22:25.180 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.180 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:25.180 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:22:25.180 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.180 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.180 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.180 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.180 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.180 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.180 13:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGRiNzZlZjgwM2U4ZGZiYTA0Mjc2Y2EyYmRhNjdlMDYJdbua: 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: ]] 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYTI4OTE4MTRiNWRhNTRjNWNhNTIzMDlkN2I0MDVhNzkzNTU0MGM5NTIzNmQ4NzE4YTIxYmIyMzc4OGQxOFKKwQY=: 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.180 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.118 nvme0n1 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: ]] 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.118 13:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.685 nvme0n1 00:22:26.685 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.685 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.685 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:26.685 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.685 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.685 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.685 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.685 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.685 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.685 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.685 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.685 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:26.685 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:26.686 13:27:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE1MGQ0NTE5NzVjNzU3ZmQ0YWRjYTE2YjM1ZDlhMTm7d066: 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: ]] 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk3MDkyYTUwNGYxM2E1ZmY1OWM1ZjA5Y2E1MjUyNGHHM5JE: 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.686 13:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.621 nvme0n1 00:22:27.621 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.621 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.621 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.621 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.621 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:27.621 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.621 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.621 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.621 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.621 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:27.621 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.621 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:27.621 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:22:27.621 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:27.621 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU4NGM0OWJmMWM4ZjZlNTU3MDRhODM5NDI2MGQwOTIxNWZkNGRhMDA0YzQyYzdiZeFzEQ==: 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: ]] 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGYwZDVhYTA0NjYxMjc1ZTlmOWU2NmU4OWZlYWY0ZGYoaHMX: 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.622 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.191 nvme0n1 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWRkOTdhYjJmN2EyMDVhMGQwMDJiMjM1MTg4NDM4MzRkOTNmYTdmZjdkNjZmOGM2MTNlYmVjNWVkNmZlMWJmMpbJ/qE=: 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.191 13:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.123 nvme0n1 00:22:29.123 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.123 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.123 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:29.123 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.123 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.123 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJiODhiNWQwNDExMWJhM2YxNTVmYzc5MTlhNmE1MThmOTkwODc1OGZiN2M3MzRhbicFNQ==: 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: ]] 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyM2Y5MDU4ZjZjZTc0YWE3ZjE5NmJiOWYzNTVlYTYzN2IyYThjMGE4NjNmZGY2Q2qiyg==: 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.124 13:27:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.124 request: 00:22:29.124 { 00:22:29.124 "name": "nvme0", 00:22:29.124 "trtype": "tcp", 00:22:29.124 "traddr": "10.0.0.1", 00:22:29.124 "adrfam": "ipv4", 00:22:29.124 "trsvcid": "4420", 00:22:29.124 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:29.124 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:29.124 "prchk_reftag": false, 00:22:29.124 "prchk_guard": false, 00:22:29.124 "hdgst": false, 00:22:29.124 "ddgst": false, 00:22:29.124 "allow_unrecognized_csi": false, 00:22:29.124 "method": "bdev_nvme_attach_controller", 00:22:29.124 "req_id": 1 00:22:29.124 } 00:22:29.124 Got JSON-RPC error response 00:22:29.124 response: 00:22:29.124 { 00:22:29.124 "code": -5, 00:22:29.124 "message": "Input/output error" 00:22:29.124 } 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.124 request: 00:22:29.124 { 00:22:29.124 "name": "nvme0", 00:22:29.124 "trtype": "tcp", 00:22:29.124 "traddr": "10.0.0.1", 00:22:29.124 "adrfam": "ipv4", 00:22:29.124 "trsvcid": "4420", 00:22:29.124 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:29.124 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:29.124 "prchk_reftag": false, 00:22:29.124 "prchk_guard": false, 00:22:29.124 "hdgst": false, 00:22:29.124 "ddgst": false, 00:22:29.124 "dhchap_key": "key2", 00:22:29.124 "allow_unrecognized_csi": false, 00:22:29.124 "method": "bdev_nvme_attach_controller", 00:22:29.124 "req_id": 1 00:22:29.124 } 00:22:29.124 Got JSON-RPC error response 00:22:29.124 response: 00:22:29.124 { 00:22:29.124 "code": -5, 00:22:29.124 "message": "Input/output error" 00:22:29.124 } 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 
-n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.124 request: 00:22:29.124 { 00:22:29.124 "name": "nvme0", 00:22:29.124 "trtype": "tcp", 00:22:29.124 "traddr": "10.0.0.1", 00:22:29.124 "adrfam": "ipv4", 00:22:29.124 "trsvcid": "4420", 00:22:29.124 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:29.124 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:29.124 "prchk_reftag": false, 00:22:29.124 "prchk_guard": false, 00:22:29.124 "hdgst": false, 00:22:29.124 "ddgst": false, 00:22:29.124 "dhchap_key": "key1", 00:22:29.124 "dhchap_ctrlr_key": "ckey2", 00:22:29.124 "allow_unrecognized_csi": false, 00:22:29.124 "method": "bdev_nvme_attach_controller", 00:22:29.124 "req_id": 1 00:22:29.124 } 00:22:29.124 Got JSON-RPC error response 00:22:29.124 response: 00:22:29.124 { 00:22:29.124 "code": -5, 00:22:29.124 "message": "Input/output error" 00:22:29.124 } 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:22:29.124 /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh: line 128: get_main_ns_ip: command not found 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # trap - ERR 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # print_backtrace 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1155 -- # args=('--transport=tcp') 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1155 -- # local args 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@1157 -- # xtrace_disable 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.124 ========== Backtrace start: ========== 00:22:29.124 00:22:29.124 in /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh:128 -> main(["--transport=tcp"]) 00:22:29.124 ... 00:22:29.124 123 NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t "$TEST_TRANSPORT" -f ipv4 \ 00:22:29.124 124 -a "$NVMF_FIRST_INITIATOR_IP" -s "$NVMF_PORT" -q "$hostnqn" -n "$subnqn" \ 00:22:29.124 125 --dhchap-key "key1" --dhchap-ctrlr-key "ckey2" 00:22:29.124 126 00:22:29.124 127 # Check reauthentication 00:22:29.124 => 128 rpc_cmd bdev_nvme_attach_controller -b nvme0 -t "$TEST_TRANSPORT" -f ipv4 \ 00:22:29.124 129 -a "$(get_main_ns_ip)" -s "$NVMF_PORT" -q "$hostnqn" -n "$subnqn" \ 00:22:29.124 130 --dhchap-key "key1" --dhchap-ctrlr-key "ckey1" --ctrlr-loss-timeout-sec 1 \ 00:22:29.124 131 --reconnect-delay-sec 1 00:22:29.124 132 nvmet_auth_set_key "sha256" "ffdhe2048" 2 00:22:29.124 133 rpc_cmd bdev_nvme_set_keys "nvme0" --dhchap-key "key2" --dhchap-ctrlr-key "ckey2" 00:22:29.124 ... 00:22:29.124 00:22:29.124 ========== Backtrace end ========== 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1194 -- # return 0 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a '' -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.124 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.124 usage: rpc.py [options] bdev_nvme_attach_controller [-h] -b NAME -t TRTYPE -a 00:22:29.124 TRADDR [-f ADRFAM] 00:22:29.124 [-s TRSVCID] [-p PRIORITY] 00:22:29.124 [-n SUBNQN] [-q HOSTNQN] 00:22:29.124 [-i HOSTADDR] 00:22:29.124 [-c HOSTSVCID] [-r] [-g] 00:22:29.124 [-e] [-d] 00:22:29.125 [--fabrics-timeout FABRICS_CONNECT_TIMEOUT_US] 00:22:29.125 [-x MULTIPATH] 00:22:29.125 [--num-io-queues NUM_IO_QUEUES] 00:22:29.125 [-l CTRLR_LOSS_TIMEOUT_SEC] 00:22:29.125 [-o RECONNECT_DELAY_SEC] 00:22:29.125 [-u FAST_IO_FAIL_TIMEOUT_SEC] 00:22:29.125 [-k PSK] [-m MAX_BDEVS] 00:22:29.125 [--dhchap-key DHCHAP_KEY] 00:22:29.125 [--dhchap-ctrlr-key DHCHAP_CTRLR_KEY] 00:22:29.125 [-U] 00:22:29.125 rpc.py [options] bdev_nvme_attach_controller: error: argument -a/--traddr: expected one argument 00:22:29.125 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:29.125 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # trap - ERR 00:22:29.125 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # print_backtrace 00:22:29.125 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:22:29.125 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1155 -- # args=('1' '--reconnect-delay-sec' '1' '--ctrlr-loss-timeout-sec' 'ckey1' '--dhchap-ctrlr-key' 'key1' '--dhchap-key' 'nqn.2024-02.io.spdk:cnode0' '-n' 'nqn.2024-02.io.spdk:host0' '-q' '4420' '-s' '' '-a' 'ipv4' '-f' 'tcp' '-t' 'nvme0' '-b' 'bdev_nvme_attach_controller' '--transport=tcp') 00:22:29.125 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1155 -- # local args 00:22:29.125 
13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1157 -- # xtrace_disable 00:22:29.125 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.125 ========== Backtrace start: ========== 00:22:29.125 00:22:29.125 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:589 -> rpc_cmd(["bdev_nvme_attach_controller"],["-b"],["nvme0"],["-t"],["tcp"],["-f"],["ipv4"],["-a"],[""],["-s"],["4420"],["-q"],["nqn.2024-02.io.spdk:host0"],["-n"],["nqn.2024-02.io.spdk:cnode0"],["--dhchap-key"],["key1"],["--dhchap-ctrlr-key"],["ckey1"],["--ctrlr-loss-timeout-sec"],["1"],["--reconnect-delay-sec"],["1"]) 00:22:29.125 ... 00:22:29.125 584 echo "$rsp" 00:22:29.125 585 done 00:22:29.125 586 00:22:29.125 587 rc=${!status[*]} 00:22:29.125 588 xtrace_restore 00:22:29.125 => 589 [[ $rc == 0 ]] 00:22:29.125 590 } 00:22:29.125 591 00:22:29.125 592 function rpc_cmd_simple_data_json() { 00:22:29.125 593 00:22:29.125 594 local elems="$1[@]" elem 00:22:29.125 ... 00:22:29.432 in /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh:128 -> main(["--transport=tcp"]) 00:22:29.432 ... 00:22:29.432 123 NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t "$TEST_TRANSPORT" -f ipv4 \ 00:22:29.432 124 -a "$NVMF_FIRST_INITIATOR_IP" -s "$NVMF_PORT" -q "$hostnqn" -n "$subnqn" \ 00:22:29.432 125 --dhchap-key "key1" --dhchap-ctrlr-key "ckey2" 00:22:29.432 126 00:22:29.432 127 # Check reauthentication 00:22:29.432 => 128 rpc_cmd bdev_nvme_attach_controller -b nvme0 -t "$TEST_TRANSPORT" -f ipv4 \ 00:22:29.432 129 -a "$(get_main_ns_ip)" -s "$NVMF_PORT" -q "$hostnqn" -n "$subnqn" \ 00:22:29.432 130 --dhchap-key "key1" --dhchap-ctrlr-key "ckey1" --ctrlr-loss-timeout-sec 1 \ 00:22:29.432 131 --reconnect-delay-sec 1 00:22:29.432 132 nvmet_auth_set_key "sha256" "ffdhe2048" 2 00:22:29.432 133 rpc_cmd bdev_nvme_set_keys "nvme0" --dhchap-key "key2" --dhchap-ctrlr-key "ckey2" 00:22:29.432 ... 00:22:29.432 00:22:29.432 ========== Backtrace end ========== 00:22:29.432 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1194 -- # return 0 00:22:29.432 13:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1 -- # cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:22:29.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.432 [2024-09-27 13:26:56.404552] Starting SPDK v25.01-pre git sha1 71dc0c1e9 / DPDK 24.03.0 initialization... 00:22:29.432 [2024-09-27 13:26:56.404872] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.432 [2024-09-27 13:26:56.547840] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.432 [2024-09-27 13:26:56.621320] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.432 [2024-09-27 13:26:56.621559] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.432 [2024-09-27 13:26:56.621943] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.432 [2024-09-27 13:26:56.622093] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.432 [2024-09-27 13:26:56.622223] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
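The backtrace above pins the failure to test/nvmf/host/auth.sh line 128: get_main_ns_ip is not defined by the common scripts in this tree, so the command substitution expands to an empty string and rpc.py aborts with "argument -a/--traddr: expected one argument". For reference, a minimal, hypothetical stand-in for the missing helper is sketched below; the fallback to $NVMF_FIRST_INITIATOR_IP (10.0.0.1 in this run, per the attach calls traced above) is an assumption for illustration only, not the actual SPDK helper.

    # Hypothetical sketch only -- the real get_main_ns_ip in SPDK's test
    # helpers may resolve the main-namespace address differently.
    get_main_ns_ip() {
        # Fall back to the address the surrounding attach calls already use
        # (-a 10.0.0.1 via $NVMF_FIRST_INITIATOR_IP).
        echo "${NVMF_FIRST_INITIATOR_IP:-10.0.0.1}"
    }

With such a helper defined, the reauthentication step quoted in the backtrace would receive a non-empty -a argument:

    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t "$TEST_TRANSPORT" -f ipv4 \
        -a "$(get_main_ns_ip)" -s "$NVMF_PORT" -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 \
        --reconnect-delay-sec 1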
00:22:29.432 [2024-09-27 13:26:56.622333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.432 [2024-09-27 13:26:56.658214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:29.432 [2024-09-27 13:27:00.166533] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.432 [2024-09-27 13:27:00.166849] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.432 [2024-09-27 13:27:00.167121] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 2 (sha384) 00:22:29.432 [2024-09-27 13:27:00.167545] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.432 [2024-09-27 13:27:00.167745] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 1 (ffdhe2048) 00:22:29.432 [2024-09-27 13:27:00.168079] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 2 (ffdhe3072) 00:22:29.432 [2024-09-27 13:27:00.168675] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 3 (ffdhe4096) 00:22:29.432 [2024-09-27 13:27:00.168905] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 4 (ffdhe6144) 00:22:29.432 [2024-09-27 13:27:00.169106] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 5 (ffdhe8192) 00:22:29.432 [2024-09-27 13:27:00.169768] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.432 [2024-09-27 13:27:00.170125] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.432 ctrlr pubkey: 00:22:29.432 00000000 47 16 45 5b e1 40 f6 af 05 2f 45 e5 79 09 a7 78 G.E[.@.../E.y..x 00:22:29.432 00000010 0c 76 26 3e 5d 65 01 66 d5 f2 7f a4 10 43 90 29 .v&>]e.f.....C.) 00:22:29.432 00000020 b5 db e5 44 d7 2a 3f 92 b4 1e 56 ad b3 37 c3 25 ...D.*?...V..7.% 00:22:29.433 00000030 82 c4 9d 13 8b 4b eb b3 75 de 68 e9 fa 1d 90 84 .....K..u.h..... 00:22:29.433 00000040 1f 09 cc 04 a5 f6 b4 6b 74 a0 27 99 2d 5f 56 a8 .......kt.'.-_V. 00:22:29.433 00000050 62 62 a9 2f 3c 88 ba ad 79 ba 0f a2 ae 61 25 da bb./<...y....a%. 00:22:29.433 00000060 61 6b 8e 01 f7 30 38 fc 88 35 a5 ba e7 93 0d b7 ak...08..5...... 00:22:29.433 00000070 b4 93 1c a5 c0 d1 63 4d 91 ed 94 7a a5 75 e1 2b ......cM...z.u.+ 00:22:29.433 00000080 30 22 72 cd 16 f0 2d 02 bf 10 64 7d 12 62 21 48 0"r...-...d}.b!H 00:22:29.433 00000090 eb 4a 4a 8c d0 68 1b c9 68 83 06 1a 87 de 74 26 .JJ..h..h.....t& 00:22:29.433 000000a0 59 25 4a b1 d4 13 95 04 38 be cc f2 b7 26 51 8f Y%J.....8....&Q. 00:22:29.433 000000b0 68 6f 76 11 1d e8 9c 47 70 a7 21 f3 59 87 d7 b9 hov....Gp.!.Y... 00:22:29.433 000000c0 d3 91 3d e6 4b f2 d7 3a 7d fc ca fd 49 db 4a 81 ..=.K..:}...I.J. 00:22:29.433 000000d0 bd a3 27 b2 d0 3f d8 84 a7 33 6a e8 31 3c fc a8 ..'..?...3j.1<.. 00:22:29.433 000000e0 7f a2 0e 3c ee 34 53 4d e7 01 1f 76 17 70 ef af ...<.4SM...v.p.. 
00:22:29.433 000000f0 24 0e ae 0e 70 5b 37 fc e3 88 76 ec 79 48 94 79 $...p[7...v.yH.y 00:22:29.433 host pubkey: 00:22:29.433 00000000 dc 6d 76 60 b3 32 46 24 6c 7a 8e 8f da b3 ed bd .mv`.2F$lz...... 00:22:29.433 00000010 18 54 52 af a7 64 0b ac 5e 3e ee 4a ce 0f d6 0c .TR..d..^>.J.... 00:22:29.433 00000020 83 08 d9 87 f9 80 0d ac b6 9a d5 b7 80 32 a9 cd .............2.. 00:22:29.433 00000030 fc b7 3b 84 10 18 bb 31 22 e9 3d 14 09 92 20 c5 ..;....1".=... . 00:22:29.433 00000040 e4 27 39 dd d3 15 64 41 d3 d4 af ef 17 d0 58 3d .'9...dA......X= 00:22:29.433 00000050 6a 9b 06 b5 6e 75 c1 82 d2 87 ef 78 95 a4 50 33 j...nu.....x..P3 00:22:29.433 00000060 f8 55 a4 4e ae bb 3a 3b 2c c7 63 95 e7 4f c5 16 .U.N..:;,.c..O.. 00:22:29.433 00000070 3b 81 d7 40 1e fb 7f 7e 92 09 63 fe ee 56 cf 10 ;..@...~..c..V.. 00:22:29.433 00000080 19 a0 c5 a5 54 d7 11 dd 62 9d f3 74 88 a9 55 9e ....T...b..t..U. 00:22:29.433 00000090 b9 a4 37 1d f9 72 14 fe 5e a3 c6 df a6 3e 95 3e ..7..r..^....>.> 00:22:29.433 000000a0 40 9d b0 8b 18 61 e6 c1 cc 79 02 d2 3a c1 27 9e @....a...y..:.'. 00:22:29.433 000000b0 59 6b d7 c4 74 9e 68 7e 8b c2 74 c3 4a 42 2c 50 Yk..t.h~..t.JB,P 00:22:29.433 000000c0 73 7b fd c2 ae fd 39 c7 e9 de fe 9b e6 ce 22 57 s{....9......."W 00:22:29.433 000000d0 2f b0 ef 37 52 dc 92 7f 42 10 8b 2e 5c 03 7d 2b /..7R...B...\.}+ 00:22:29.433 000000e0 86 90 6e e2 4e 67 6d 1f 68 95 40 57 33 0b e1 7d ..n.Ngm.h.@W3..} 00:22:29.433 000000f0 7b 98 97 0d e9 1f dc 9d 41 41 52 0f 3d 3d 81 99 {.......AAR.==.. 00:22:29.433 dh secret: 00:22:29.433 00000000 ec 3e b2 37 af be ea 0e c3 bb 8d e4 d3 81 09 2a .>.7...........* 00:22:29.433 00000010 cc bb d7 00 8c 77 45 90 ea f7 5a 27 13 41 6d ea .....wE...Z'.Am. 00:22:29.433 00000020 df 91 37 4c d2 44 7d 43 65 93 8d 51 3f eb 59 6c ..7L.D}Ce..Q?.Yl 00:22:29.433 00000030 cc 7f d2 58 d0 3a 95 ba cb 6e 32 da ac e0 50 f0 ...X.:...n2...P. 00:22:29.433 00000040 72 9c 83 83 e8 89 dd 95 b0 fc e6 07 66 3e 79 9a r...........f>y. 00:22:29.433 00000050 64 a3 83 02 44 f2 51 24 0a 37 72 b9 63 b8 37 27 d...D.Q$.7r.c.7' 00:22:29.433 00000060 82 f7 07 37 0f 47 9f d2 db 32 16 fe a5 d9 6d 71 ...7.G...2....mq 00:22:29.433 00000070 3f 1f 33 e7 03 ce ee c5 ba 70 ab fa d2 41 0e 4d ?.3......p...A.M 00:22:29.433 00000080 e5 f9 df 8c a6 25 ad 98 43 1e 77 90 30 43 ec cd .....%..C.w.0C.. 00:22:29.433 00000090 6f a6 0a 49 35 15 d7 10 e6 74 85 ca a6 e8 ab bc o..I5....t...... 00:22:29.433 000000a0 e9 0e cc 8e 2d ed f0 62 ba 63 34 e1 f9 79 47 f3 ....-..b.c4..yG. 00:22:29.433 000000b0 fb 46 55 83 cf 82 4a 62 6b d2 f5 d8 4b a7 e7 df .FU...Jbk...K... 00:22:29.433 000000c0 71 7d de 69 7d 45 f0 40 4c fb 37 a7 ae 80 23 29 q}.i}E.@L.7...#) 00:22:29.433 000000d0 5a 26 38 19 49 53 c4 63 0c 5f 96 47 db 2e 5b 17 Z&8.IS.c._.G..[. 
00:22:29.433 000000e0 76 13 45 fa 10 f4 2b 29 e4 ff 17 39 5a 0f 94 73 v.E...+)...9Z..s 00:22:29.433 000000f0 9c 75 63 43 0e 77 8d 1e f0 81 6a 60 0b f1 db 3c .ucC.w....j`...< 00:22:29.433 [2024-09-27 13:27:00.182740] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key1, hash=1, dhgroup=1, seq=3775755171, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.433 [2024-09-27 13:27:00.183561] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.433 [2024-09-27 13:27:00.187693] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.433 [2024-09-27 13:27:00.188099] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.433 [2024-09-27 13:27:00.188432] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.433 [2024-09-27 13:27:00.188920] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.433 [2024-09-27 13:27:00.241508] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.433 [2024-09-27 13:27:00.241931] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 1 (sha256) 00:22:29.433 [2024-09-27 13:27:00.243074] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.433 [2024-09-27 13:27:00.243289] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.433 [2024-09-27 13:27:00.243484] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 1 (ffdhe2048) 00:22:29.433 [2024-09-27 13:27:00.243759] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 2 (ffdhe3072) 00:22:29.433 [2024-09-27 13:27:00.244073] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 3 (ffdhe4096) 00:22:29.433 [2024-09-27 13:27:00.244226] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 4 (ffdhe6144) 00:22:29.433 [2024-09-27 13:27:00.244318] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 5 (ffdhe8192) 00:22:29.433 [2024-09-27 13:27:00.244435] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.433 [2024-09-27 13:27:00.244642] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.433 ctrlr pubkey: 00:22:29.433 00000000 47 16 45 5b e1 40 f6 af 05 2f 45 e5 79 09 a7 78 G.E[.@.../E.y..x 00:22:29.433 00000010 0c 76 26 3e 5d 65 01 66 d5 f2 7f a4 10 43 90 29 .v&>]e.f.....C.) 
00:22:29.433 00000020 b5 db e5 44 d7 2a 3f 92 b4 1e 56 ad b3 37 c3 25 ...D.*?...V..7.% 00:22:29.433 00000030 82 c4 9d 13 8b 4b eb b3 75 de 68 e9 fa 1d 90 84 .....K..u.h..... 00:22:29.433 00000040 1f 09 cc 04 a5 f6 b4 6b 74 a0 27 99 2d 5f 56 a8 .......kt.'.-_V. 00:22:29.433 00000050 62 62 a9 2f 3c 88 ba ad 79 ba 0f a2 ae 61 25 da bb./<...y....a%. 00:22:29.433 00000060 61 6b 8e 01 f7 30 38 fc 88 35 a5 ba e7 93 0d b7 ak...08..5...... 00:22:29.433 00000070 b4 93 1c a5 c0 d1 63 4d 91 ed 94 7a a5 75 e1 2b ......cM...z.u.+ 00:22:29.433 00000080 30 22 72 cd 16 f0 2d 02 bf 10 64 7d 12 62 21 48 0"r...-...d}.b!H 00:22:29.433 00000090 eb 4a 4a 8c d0 68 1b c9 68 83 06 1a 87 de 74 26 .JJ..h..h.....t& 00:22:29.433 000000a0 59 25 4a b1 d4 13 95 04 38 be cc f2 b7 26 51 8f Y%J.....8....&Q. 00:22:29.433 000000b0 68 6f 76 11 1d e8 9c 47 70 a7 21 f3 59 87 d7 b9 hov....Gp.!.Y... 00:22:29.433 000000c0 d3 91 3d e6 4b f2 d7 3a 7d fc ca fd 49 db 4a 81 ..=.K..:}...I.J. 00:22:29.433 000000d0 bd a3 27 b2 d0 3f d8 84 a7 33 6a e8 31 3c fc a8 ..'..?...3j.1<.. 00:22:29.433 000000e0 7f a2 0e 3c ee 34 53 4d e7 01 1f 76 17 70 ef af ...<.4SM...v.p.. 00:22:29.433 000000f0 24 0e ae 0e 70 5b 37 fc e3 88 76 ec 79 48 94 79 $...p[7...v.yH.y 00:22:29.433 host pubkey: 00:22:29.433 00000000 b7 99 f1 c2 3c 4a 49 5d 3e 27 ac aa ff 53 b6 22 ....'...S." 00:22:29.433 00000010 da 0a 9b 80 62 d8 c8 c6 fd b9 d1 4a a5 d3 97 bd ....b......J.... 00:22:29.433 00000020 0d c6 43 1c 95 53 11 73 09 76 90 7c f3 f2 0d 23 ..C..S.s.v.|...# 00:22:29.433 00000030 ac 5b 12 ac 72 c5 10 50 84 39 29 91 2b 76 58 a9 .[..r..P.9).+vX. 00:22:29.433 00000040 22 1d b9 64 5f dc f7 ef 0b 1d 86 8c 82 87 c0 2c "..d_.........., 00:22:29.433 00000050 1e bd 56 13 ff 75 3b 32 79 cd 47 1f 81 54 6e 9f ..V..u;2y.G..Tn. 00:22:29.433 00000060 6a 9e 9c 2d e0 0d ef fc b5 51 71 a2 96 eb 44 33 j..-.....Qq...D3 00:22:29.433 00000070 af 25 af 10 fa 57 a0 c9 06 ad 6d d0 12 bf 68 50 .%...W....m...hP 00:22:29.433 00000080 0d 39 43 5a 8f f1 cc 58 20 17 b3 ba ea e1 1e e6 .9CZ...X ....... 00:22:29.433 00000090 c3 05 8a af 13 c0 38 f9 02 fd 8d d1 1c 41 81 ea ......8......A.. 00:22:29.433 000000a0 f8 80 f6 1c e8 22 ea 6b 97 f8 81 35 2c 41 52 db .....".k...5,AR. 00:22:29.433 000000b0 8d a4 fc aa a7 8e dc 6b cd 12 03 0c 3a fd 2d f8 .......k....:.-. 00:22:29.433 000000c0 72 cf 9f af f2 b2 dc 09 4d 3c 6e 05 e3 91 60 f1 r.......M..kHLq.Ix... 00:22:29.433 000000e0 49 cd f9 45 b6 9b 7f 23 cc d4 50 50 b5 e7 fd a5 I..E...#..PP.... 00:22:29.433 000000f0 25 bc d9 b9 33 a0 b5 6c 3e ea fa 6b 54 ec 6c 80 %...3..l>..kT.l. 00:22:29.433 dh secret: 00:22:29.433 00000000 db e6 39 87 85 fa b6 57 5d 58 17 f2 9f f5 9d 47 ..9....W]X.....G 00:22:29.433 00000010 46 cc c4 7b b9 2b 57 97 fa 6c 8b 96 98 94 79 42 F..{.+W..l....yB 00:22:29.433 00000020 05 2e 95 c5 05 3d 1a 10 a3 4a d1 d5 25 ea df f8 .....=...J..%... 00:22:29.433 00000030 ac ed a1 da 8e 4b 6a a8 a1 90 dc fc bd 2c 76 4a .....Kj......,vJ 00:22:29.433 00000040 e8 82 25 ba 4e 1a 97 92 fb 53 1b f8 0b 03 41 cb ..%.N....S....A. 00:22:29.433 00000050 65 a2 c1 dd f7 4c 37 17 71 c3 88 3b d0 26 e6 78 e....L7.q..;.&.x 00:22:29.433 00000060 9b 14 85 f6 80 d8 e4 42 34 8c 02 b3 0b 86 8f 56 .......B4......V 00:22:29.433 00000070 ec fd 2a ca ea 50 5a cb 81 5d 25 43 ce 46 d9 6a ..*..PZ..]%C.F.j 00:22:29.433 00000080 f6 c0 db 53 c4 d8 6a 00 99 60 df 80 fa 83 f8 62 ...S..j..`.....b 00:22:29.433 00000090 e4 fd c0 50 3d f0 05 79 1a 80 d1 43 39 fb 81 ec ...P=..y...C9... 
00:22:29.433 000000a0 9f 8f a6 b9 d9 8e 69 f5 12 16 3b cd 28 fb e2 67 ......i...;.(..g 00:22:29.433 000000b0 7d 9a 34 c2 e3 97 f9 36 41 9e 0e a2 17 e9 34 0b }.4....6A.....4. 00:22:29.433 000000c0 84 dc 45 cf 94 3c 5d b2 48 52 12 e1 f3 2a 66 51 ..E..<].HR...*fQ 00:22:29.433 000000d0 7b 19 98 e4 75 89 2c 89 e3 f3 75 30 e3 81 68 15 {...u.,...u0..h. 00:22:29.433 000000e0 e0 4d 2e 68 62 00 db 42 a4 c2 12 bf 56 2b 82 4c .M.hb..B....V+.L 00:22:29.433 000000f0 58 de ae fa 15 0e ca c4 ec bc ec 91 f4 32 b9 0b X............2.. 00:22:29.433 [2024-09-27 13:27:00.251231] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key1, hash=1, dhgroup=1, seq=3775755172, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.433 [2024-09-27 13:27:00.251578] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.433 [2024-09-27 13:27:00.255826] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.433 [2024-09-27 13:27:00.256353] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.433 [2024-09-27 13:27:00.256652] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.433 [2024-09-27 13:27:00.256834] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.433 [2024-09-27 13:27:00.359585] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.434 [2024-09-27 13:27:00.360061] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.434 [2024-09-27 13:27:00.360250] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 1 (ffdhe2048) 00:22:29.434 [2024-09-27 13:27:00.360382] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.434 [2024-09-27 13:27:00.360620] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.434 ctrlr pubkey: 00:22:29.434 00000000 d4 6b 87 ad 57 94 af 06 fc 2b dd ad 44 98 0e c2 .k..W....+..D... 00:22:29.434 00000010 9e 31 30 8a 9f a3 e1 8e 69 27 7c f1 18 26 77 e8 .10.....i'|..&w. 00:22:29.434 00000020 16 9e 71 a2 dd 13 c2 5e 45 b7 f1 b7 f9 2b bb e8 ..q....^E....+.. 00:22:29.434 00000030 e5 a1 37 fc b8 88 fb 95 f8 75 f0 8c bc 1c 8e 5b ..7......u.....[ 00:22:29.434 00000040 d0 85 fb 9d 4a 65 4f 8c 77 73 e8 07 21 9b 49 de ....JeO.ws..!.I. 00:22:29.434 00000050 5c 00 fe e8 fd ca d4 c7 8c 45 34 e5 ab 3e f9 61 \........E4..>.a 00:22:29.434 00000060 f6 22 c2 b6 0c 2e 68 19 4f d4 09 66 c2 e8 07 58 ."....h.O..f...X 00:22:29.434 00000070 e9 46 35 b6 cb fe 39 87 33 5d 05 d8 99 78 fa bc .F5...9.3]...x.. 00:22:29.434 00000080 df 80 f1 d7 ce d7 37 2b 2f f6 19 40 91 ad 27 ce ......7+/..@..'. 00:22:29.434 00000090 33 5a 29 1d aa 02 3a 64 1e 66 67 8d ec af 12 df 3Z)...:d.fg..... 00:22:29.434 000000a0 16 3c 63 fc e4 f6 e7 2a eb e6 4a d2 09 ac 66 c1 .|.D.-. 
00:22:29.434 000000e0 55 7d 26 44 41 b4 d2 10 cc 15 65 ce 9c 05 f7 8b U}&DA.....e..... 00:22:29.434 000000f0 45 6f a9 75 0d 0c d4 80 74 bb 40 72 72 64 f4 2c Eo.u....t.@rrd., 00:22:29.434 host pubkey: 00:22:29.434 00000000 bf fd d7 1c 54 02 96 f1 a8 87 d7 ed 4f e9 99 d0 ....T.......O... 00:22:29.434 00000010 0d f6 f0 76 8d 9a 6f 5a b8 24 11 28 21 9c 32 34 ...v..oZ.$.(!.24 00:22:29.434 00000020 3d 2d 61 b6 ac 26 67 60 d0 e6 62 fc 4a f2 2a 4c =-a..&g`..b.J.*L 00:22:29.434 00000030 5a 42 db fc 3e 5e e0 97 28 0b fe 17 42 ef 50 27 ZB..>^..(...B.P' 00:22:29.434 00000040 c3 ab fd 3a 36 dc 25 53 e2 e0 38 3b 92 f3 ec 03 ...:6.%S..8;.... 00:22:29.434 00000050 69 08 1f c1 62 7d 47 42 35 7d 4e 67 46 4b f6 50 i...b}GB5}NgFK.P 00:22:29.434 00000060 6e 7c 0c a5 11 69 ca bb cc 36 65 3b 04 c9 ca df n|...i...6e;.... 00:22:29.434 00000070 23 5b a3 d9 f9 a0 2e ab d7 54 4a 4f a9 83 29 31 #[.......TJO..)1 00:22:29.434 00000080 8d 20 5f 09 83 30 3b a4 51 82 29 64 c3 f6 3b 17 . _..0;.Q.)d..;. 00:22:29.434 00000090 3a 4b 64 2d 72 08 e9 64 2f 23 1e 7b c3 94 88 02 :Kd-r..d/#.{.... 00:22:29.434 000000a0 89 2f 41 91 8d 3e af 27 29 5e d8 b9 f1 aa dc bd ./A..>.')^...... 00:22:29.434 000000b0 81 e6 11 a8 76 4f d9 33 c1 8d c3 df 72 d4 a8 5e ....vO.3....r..^ 00:22:29.434 000000c0 d7 84 24 8f 54 32 88 dc ec d5 4d 98 e8 09 bc 1c ..$.T2....M..... 00:22:29.434 000000d0 67 6d 45 c4 44 ed 1a 7f fe 7d 1e dc f2 9c 4c 23 gmE.D....}....L# 00:22:29.434 000000e0 28 59 12 27 f9 89 7b e8 3c a9 c8 4c 6d 45 f1 de (Y.'..{.<..LmE.. 00:22:29.434 000000f0 0c dc 04 ed a4 06 f3 8a 46 9c 8b ca 0b dd 77 06 ........F.....w. 00:22:29.434 dh secret: 00:22:29.434 00000000 d3 d4 fd 3c 2f e3 a2 83 67 8c 8e 27 38 e5 bc 0e ...w 00:22:29.434 00000030 0d 14 b7 11 f9 8e 75 99 72 ae 84 38 b1 6a b9 ee ......u.r..8.j.. 00:22:29.434 00000040 4a 94 3a 63 14 bc 63 44 92 11 d3 e6 7f cc 1d c9 J.:c..cD........ 00:22:29.434 00000050 73 dd 6b b4 df 02 b9 d2 c5 e8 4e 3b cd 77 9d 74 s.k.......N;.w.t 00:22:29.434 00000060 98 43 18 3d f6 eb 0b 90 1e 44 cb ce 67 d5 95 b9 .C.=.....D..g... 00:22:29.434 00000070 ea bf 26 3a 42 f5 8a e5 77 d8 e9 4e 1c 68 2e d2 ..&:B...w..N.h.. 00:22:29.434 00000080 0d d3 01 3a 61 8b a6 d2 c1 f6 55 49 9c 56 1f 62 ...:a.....UI.V.b 00:22:29.434 00000090 ec dc 6d fb 5a 23 a5 95 af 4f 39 4d b0 78 f0 06 ..m.Z#...O9M.x.. 00:22:29.434 000000a0 7d 51 63 bc cb d6 83 6b e4 99 09 7d 87 6c e7 99 }Qc....k...}.l.. 00:22:29.434 000000b0 3a 96 12 5a 4f 31 4a 4c 38 83 ab 1d b2 17 78 31 :..ZO1JL8.....x1 00:22:29.434 000000c0 9b fe c6 f2 00 ff be 45 95 fe f2 b3 cf 25 1c ba .......E.....%.. 00:22:29.434 000000d0 45 56 a1 4e 21 bd 5e 70 c2 ce fd 83 67 89 b0 64 EV.N!.^p....g..d 00:22:29.434 000000e0 a3 ec ee 18 82 d7 43 18 46 24 24 0c 76 02 a5 9c ......C.F$$.v... 00:22:29.434 000000f0 ca 29 81 cd 01 9b ff 5f 75 f2 19 f4 9c 1e d4 91 .)....._u....... 
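Each qpair logged above walks the same DH-HMAC-CHAP state sequence: negotiate, await-negotiate, await-challenge, await-reply, await-success1, await-success2, done. The short C sketch below only mirrors that progression as a reading aid for the rest of the run; the enum and function names are made up here and are not the types used in nvme_auth.c.

/* Toy model of the auth state progression printed in the debug lines
 * above. Names are illustrative, not taken from nvme_auth.c. */
#include <stdio.h>

enum auth_state {
    AUTH_NEGOTIATE,
    AUTH_AWAIT_NEGOTIATE,
    AUTH_AWAIT_CHALLENGE,
    AUTH_AWAIT_REPLY,
    AUTH_AWAIT_SUCCESS1,
    AUTH_AWAIT_SUCCESS2,
    AUTH_DONE
};

static const char *auth_state_name(enum auth_state s)
{
    switch (s) {
    case AUTH_NEGOTIATE:       return "negotiate";
    case AUTH_AWAIT_NEGOTIATE: return "await-negotiate";
    case AUTH_AWAIT_CHALLENGE: return "await-challenge";
    case AUTH_AWAIT_REPLY:     return "await-reply";
    case AUTH_AWAIT_SUCCESS1:  return "await-success1";
    case AUTH_AWAIT_SUCCESS2:  return "await-success2";
    case AUTH_DONE:            return "done";
    }
    return "unknown";
}

int main(void)
{
    /* Print the states in the order one qpair reaches them in this log. */
    for (int s = AUTH_NEGOTIATE; s <= AUTH_DONE; s++) {
        printf("auth state: %s\n", auth_state_name((enum auth_state)s));
    }
    return 0;
}

One transaction later in this run reaches done straight from await-success1, so the success2 step is evidently not always present; the sketch keeps the full sequence because that is what most transactions here show.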
00:22:29.434 [2024-09-27 13:27:00.366713] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key0, hash=1, dhgroup=1, seq=3775755173, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.434 [2024-09-27 13:27:00.367064] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.434 [2024-09-27 13:27:00.370915] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.434 [2024-09-27 13:27:00.371323] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.434 [2024-09-27 13:27:00.371457] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.434 [2024-09-27 13:27:00.371706] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.434 [2024-09-27 13:27:00.423477] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.434 [2024-09-27 13:27:00.423798] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 1 (sha256) 00:22:29.434 [2024-09-27 13:27:00.424041] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 1 (ffdhe2048) 00:22:29.434 [2024-09-27 13:27:00.424268] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.434 [2024-09-27 13:27:00.424526] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.434 ctrlr pubkey: 00:22:29.434 00000000 d4 6b 87 ad 57 94 af 06 fc 2b dd ad 44 98 0e c2 .k..W....+..D... 00:22:29.434 00000010 9e 31 30 8a 9f a3 e1 8e 69 27 7c f1 18 26 77 e8 .10.....i'|..&w. 00:22:29.434 00000020 16 9e 71 a2 dd 13 c2 5e 45 b7 f1 b7 f9 2b bb e8 ..q....^E....+.. 00:22:29.434 00000030 e5 a1 37 fc b8 88 fb 95 f8 75 f0 8c bc 1c 8e 5b ..7......u.....[ 00:22:29.434 00000040 d0 85 fb 9d 4a 65 4f 8c 77 73 e8 07 21 9b 49 de ....JeO.ws..!.I. 00:22:29.434 00000050 5c 00 fe e8 fd ca d4 c7 8c 45 34 e5 ab 3e f9 61 \........E4..>.a 00:22:29.434 00000060 f6 22 c2 b6 0c 2e 68 19 4f d4 09 66 c2 e8 07 58 ."....h.O..f...X 00:22:29.434 00000070 e9 46 35 b6 cb fe 39 87 33 5d 05 d8 99 78 fa bc .F5...9.3]...x.. 00:22:29.434 00000080 df 80 f1 d7 ce d7 37 2b 2f f6 19 40 91 ad 27 ce ......7+/..@..'. 00:22:29.434 00000090 33 5a 29 1d aa 02 3a 64 1e 66 67 8d ec af 12 df 3Z)...:d.fg..... 00:22:29.434 000000a0 16 3c 63 fc e4 f6 e7 2a eb e6 4a d2 09 ac 66 c1 .|.D.-. 00:22:29.434 000000e0 55 7d 26 44 41 b4 d2 10 cc 15 65 ce 9c 05 f7 8b U}&DA.....e..... 00:22:29.434 000000f0 45 6f a9 75 0d 0c d4 80 74 bb 40 72 72 64 f4 2c Eo.u....t.@rrd., 00:22:29.434 host pubkey: 00:22:29.434 00000000 c8 1b 34 19 ae f5 81 14 b4 bb 54 ab b1 ae 68 3a ..4.......T...h: 00:22:29.434 00000010 07 db a5 99 c4 af 5c e3 8f 9e 13 ec ef 62 c2 e6 ......\......b.. 00:22:29.434 00000020 d4 8c f0 c9 4d 6d ce f4 ce f3 27 b6 25 11 76 ae ....Mm....'.%.v. 00:22:29.434 00000030 39 42 5d d4 14 78 2d 75 39 8c 40 92 f9 13 12 2e 9B]..x-u9.@..... 
00:22:29.434 00000040 e0 d0 e2 d9 f5 76 75 09 de d6 f0 24 97 56 e5 be .....vu....$.V.. 00:22:29.434 00000050 4b e2 94 29 38 f8 97 ee 3f 32 d4 f4 00 fb 1c 08 K..)8...?2...... 00:22:29.434 00000060 11 99 f3 fd 5d 06 6f 57 99 ae 6e e4 d3 92 b3 36 ....].oW..n....6 00:22:29.434 00000070 4d ff cc b5 ef 05 b8 ce 25 1d ac 05 9a be 98 93 M.......%....... 00:22:29.434 00000080 25 ab ad 53 a7 0a 0f 34 44 f3 43 8a 5a 96 72 3e %..S...4D.C.Z.r> 00:22:29.434 00000090 d4 c1 a9 7b b2 c1 18 58 f4 74 f8 4a c1 47 44 46 ...{...X.t.J.GDF 00:22:29.434 000000a0 cc 1b 14 e8 12 88 1c 92 24 48 9c 45 01 df 99 b0 ........$H.E.... 00:22:29.434 000000b0 3f 0b a5 b5 f4 ce ca 5d f7 ca 0e 2f 2e 2d 14 66 ?......].../.-.f 00:22:29.434 000000c0 34 61 8b 56 73 5f f6 95 8b e4 e5 23 ac be 1f 44 4a.Vs_.....#...D 00:22:29.434 000000d0 97 63 1d cb 7d 95 d0 53 01 71 e6 33 55 61 4b 17 .c..}..S.q.3UaK. 00:22:29.434 000000e0 36 ee b1 a5 22 11 f0 8c ba d2 9a 18 b2 e4 c7 93 6..."........... 00:22:29.434 000000f0 83 ab 70 2c 32 67 36 e3 c4 f5 96 74 fc 5d c8 16 ..p,2g6....t.].. 00:22:29.434 dh secret: 00:22:29.434 00000000 a7 2a b0 34 3b a4 c3 22 a0 79 71 bd 4d 61 71 55 .*.4;..".yq.MaqU 00:22:29.434 00000010 03 8e 64 52 0f 34 e4 b2 27 3b 40 d7 a7 c5 51 8f ..dR.4..';@...Q. 00:22:29.434 00000020 be d4 80 c5 45 19 ce b4 9b a4 c0 49 ef eb 2d cd ....E......I..-. 00:22:29.434 00000030 b5 14 6a cc 71 f8 b4 0f 14 39 4c 62 fe 0e 81 7d ..j.q....9Lb...} 00:22:29.434 00000040 09 a6 a1 c3 a5 23 20 cb 95 be d3 52 d3 d6 94 23 .....# ....R...# 00:22:29.434 00000050 52 25 49 91 f9 4d ab 6b 9d af 53 b7 3d a4 d1 b7 R%I..M.k..S.=... 00:22:29.434 00000060 cb cb f3 71 6f 19 36 56 b5 d7 d9 bf c3 31 01 cc ...qo.6V.....1.. 00:22:29.434 00000070 ef ac 0f 1a ce e7 f6 69 6e 18 cb c5 2c 8f da a8 .......in...,... 00:22:29.434 00000080 41 2a 58 20 dc 08 56 81 aa 26 44 25 a8 f7 5a 32 A*X ..V..&D%..Z2 00:22:29.434 00000090 10 8d a9 df 51 63 48 b7 05 eb af e9 8f 21 0f b5 ....QcH......!.. 00:22:29.434 000000a0 f8 3e 70 2e f0 8f d3 8c 3f aa 75 c6 46 d4 4e de .>p.....?.u.F.N. 00:22:29.434 000000b0 68 29 9a 8d a4 39 fd e0 3d 58 5a 5e 51 19 97 1f h)...9..=XZ^Q... 00:22:29.434 000000c0 11 59 bf b3 9f fc 33 19 d9 67 a4 d3 c6 0d 0d 3d .Y....3..g.....= 00:22:29.434 000000d0 ac 66 8a 79 08 9e 30 73 5c 55 13 1c d6 77 10 62 .f.y..0s\U...w.b 00:22:29.434 000000e0 a1 59 60 21 ac d9 2c 6e ab af 60 32 ff ba e5 10 .Y`!..,n..`2.... 00:22:29.434 000000f0 da 88 30 4e 8c 09 7e 02 12 ec 28 7d 47 d2 47 f5 ..0N..~...(}G.G. 
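The negotiate lines in this run only ever print digest 1 (sha256) and dhgroups 1 (ffdhe2048) and 2 (ffdhe3072). A minimal lookup covering just those observed values, offered as an illustration rather than the project's own tables, could look like this:

/* Illustrative lookup for the identifiers printed in the negotiate lines
 * above. Only the values observed in this log are included; the protocol
 * defines more digests and DH groups than these. */
#include <stdio.h>

static const char *digest_name(int id)
{
    return id == 1 ? "sha256" : "unknown";
}

static const char *dhgroup_name(int id)
{
    switch (id) {
    case 1: return "ffdhe2048";
    case 2: return "ffdhe3072";
    default: return "unknown";
    }
}

int main(void)
{
    printf("digest: 1 (%s)\n", digest_name(1));
    printf("dhgroup: 1 (%s)\n", dhgroup_name(1));
    printf("dhgroup: 2 (%s)\n", dhgroup_name(2));
    return 0;
}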
00:22:29.434 [2024-09-27 13:27:00.430613] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key0, hash=1, dhgroup=1, seq=3775755174, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.434 [2024-09-27 13:27:00.431000] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.434 [2024-09-27 13:27:00.434888] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.435 [2024-09-27 13:27:00.435356] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.435 [2024-09-27 13:27:00.435536] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.435 [2024-09-27 13:27:00.435768] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.435 [2024-09-27 13:27:00.530487] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.435 [2024-09-27 13:27:00.530855] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.435 [2024-09-27 13:27:00.531012] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 1 (ffdhe2048) 00:22:29.435 [2024-09-27 13:27:00.531151] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.435 [2024-09-27 13:27:00.531382] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.435 ctrlr pubkey: 00:22:29.435 00000000 77 21 58 88 cc 12 bd 35 24 30 7e 82 5f a7 e1 79 w!X....5$0~._..y 00:22:29.435 00000010 9c 21 72 6f e0 ec 24 0e 83 47 cd 07 8b fb c2 a7 .!ro..$..G...... 00:22:29.435 00000020 d7 08 70 b0 ec c3 d3 35 54 fc b2 ce e9 26 63 6f ..p....5T....&co 00:22:29.435 00000030 74 ff e7 c8 57 6a d2 34 a8 ba aa b2 fe 22 5a 1e t...Wj.4....."Z. 00:22:29.435 00000040 26 ba 94 a2 88 f0 a8 75 78 8d 15 e7 53 ac e5 10 &......ux...S... 00:22:29.435 00000050 3d 77 ac 45 d5 de 5c 04 20 f9 74 09 60 0c ee c1 =w.E..\. .t.`... 00:22:29.435 00000060 e2 1c 97 45 4a 99 ea a7 a5 86 17 37 68 38 17 d2 ...EJ......7h8.. 00:22:29.435 00000070 d4 2a 47 fc b9 66 a5 ad 97 cc 0e 79 f8 ad 72 11 .*G..f.....y..r. 00:22:29.435 00000080 73 32 cf 1a 78 b0 b5 b1 a2 30 25 bb 5f 00 d6 ce s2..x....0%._... 00:22:29.435 00000090 6c d3 3b 41 e1 f5 bc 96 95 88 fd fd ad 67 c3 fe l.;A.........g.. 00:22:29.435 000000a0 fc 6f 29 16 b7 91 a5 3b 59 88 e0 7f 0b d8 56 0a .o)....;Y.....V. 00:22:29.435 000000b0 89 12 a0 81 69 1c 85 4e 43 29 54 e5 30 52 aa bf ....i..NC)T.0R.. 
00:22:29.435 000000c0 4a 5a 3d 53 72 b2 34 56 6b 24 1e 09 3a 2c 97 3a JZ=Sr.4Vk$..:,.: 00:22:29.435 000000d0 74 f5 c2 77 39 b2 44 62 b5 58 23 ec 18 34 c2 6c t..w9.Db.X#..4.l 00:22:29.435 000000e0 51 69 4c aa 2b 90 73 33 3b 60 30 0e 09 db 85 7b QiL.+.s3;`0....{ 00:22:29.435 000000f0 4b 55 2b b5 06 a3 6d 61 d2 e5 da 64 f5 44 91 2c KU+...ma...d.D., 00:22:29.435 host pubkey: 00:22:29.435 00000000 a7 05 bd 91 29 37 3c 78 50 af c4 36 51 b5 f9 31 ....)7xY...{g.".P 00:22:29.435 000000f0 24 ce 13 e1 6b a0 17 e8 fe c5 21 27 44 de a6 2d $...k.....!'D..- 00:22:29.435 dh secret: 00:22:29.435 00000000 d8 15 00 ab d4 67 89 66 26 9b 76 ca e6 db 55 80 .....g.f&.v...U. 00:22:29.435 00000010 61 0d d5 1a 50 9c b9 ae bd e3 21 58 7f 88 6c bc a...P.....!X..l. 00:22:29.435 00000020 f5 55 b8 7a 80 b9 f8 67 09 23 2c 19 6f 07 81 dd .U.z...g.#,.o... 00:22:29.435 00000030 c2 fc 3f 08 9d bb 89 71 ab 52 7d f5 de 78 8d 01 ..?....q.R}..x.. 00:22:29.435 00000040 2d 62 e8 00 da dd 0c 2e 58 8e 99 f8 70 c6 94 ef -b......X...p... 00:22:29.435 00000050 bc 62 be c7 0d c0 91 e3 de aa ec 22 2d 0a 68 7c .b........."-.h| 00:22:29.435 00000060 c3 91 66 c5 f4 07 04 7f 88 b5 bf 41 d6 30 33 91 ..f........A.03. 00:22:29.435 00000070 33 02 24 c0 63 a4 2a 95 2b 53 06 af b8 6c ab 74 3.$.c.*.+S...l.t 00:22:29.435 00000080 94 a4 04 6f ad b1 98 8b 09 17 48 75 4d 5a 31 90 ...o......HuMZ1. 00:22:29.435 00000090 e1 3c 05 ea 74 0e 91 ea 05 12 b8 b9 82 3c de 46 .<..t........<.F 00:22:29.435 000000a0 e8 47 d7 77 7c 1d 3c 72 fa d0 7a 82 7e 46 9f 3c .G.w|.T2C...'^l.._ 00:22:29.435 00000050 48 ab 60 57 36 81 2c 67 0d 4e 9b 0e 37 a0 8c 38 H.`W6.,g.N..7..8 00:22:29.435 00000060 e3 8e 84 be b0 de bc 27 c0 a4 41 1b 75 8c da c2 .......'..A.u... 00:22:29.435 00000070 9a dd a6 89 b4 82 4c d3 3d 12 1e 0f 6f 5d f6 e6 ......L.=...o].. 00:22:29.435 00000080 ae 45 38 3f 53 c5 8c 03 bb 0d 13 ab 66 72 4a 67 .E8?S.......frJg 00:22:29.435 00000090 c6 28 72 ad 58 61 b4 ce 26 80 29 9b 90 65 52 8e .(r.Xa..&.)..eR. 00:22:29.435 000000a0 e2 e8 5a 9d 3b ff 92 c8 79 d0 ce ad 94 20 da d0 ..Z.;...y.... .. 00:22:29.435 000000b0 72 08 82 a1 66 a3 9f 75 30 be d4 6b 0e f4 17 33 r...f..u0..k...3 00:22:29.435 000000c0 ab bd 18 7e 32 41 93 1f 56 c3 ae 6f 21 dc b4 57 ...~2A..V..o!..W 00:22:29.435 000000d0 56 3c e8 e6 77 e2 e0 f2 e6 cf 60 f9 b9 ea dd ab V<..w.....`..... 00:22:29.435 000000e0 08 d2 28 03 44 31 07 e3 26 47 1a ab b8 00 cf 05 ..(.D1..&G...... 00:22:29.435 000000f0 eb 71 65 13 59 a5 9f 4d de 86 15 5d 94 5d 36 c1 .qe.Y..M...].]6. 00:22:29.435 dh secret: 00:22:29.435 00000000 92 3b 84 f1 71 9a 61 42 15 55 64 b8 54 1d 43 7c .;..q.aB.Ud.T.C| 00:22:29.435 00000010 3f 47 04 8a cd ec 71 5d b0 64 1d f9 42 9f 20 bf ?G....q].d..B. . 00:22:29.435 00000020 5b ae 9a 43 89 ff 2e 27 c8 07 9a bb b5 d1 78 6d [..C...'......xm 00:22:29.435 00000030 4e 96 b5 e7 61 0a d3 4d 3a 91 f9 1e cb 45 4a 4d N...a..M:....EJM 00:22:29.435 00000040 1c b5 29 68 6d 9d eb 72 50 91 49 e2 31 6d 6a 03 ..)hm..rP.I.1mj. 00:22:29.435 00000050 4c 6c 94 6c d5 09 6f c1 32 0a a4 35 b4 71 fc 05 Ll.l..o.2..5.q.. 00:22:29.435 00000060 56 8e d6 25 45 ae 5c b8 f2 70 15 05 5d af ee 47 V..%E.\..p..]..G 00:22:29.435 00000070 92 4c dd 9e f0 ec 9d 22 1f c0 f6 20 e4 93 b2 6c .L....."... ...l 00:22:29.435 00000080 33 43 c4 96 b3 05 51 9a 96 45 16 6e 8a b7 f9 ec 3C....Q..E.n.... 00:22:29.435 00000090 3d d0 1b 94 08 94 2d 71 c1 2a dc 48 f9 4c b7 6c =.....-q.*.H.L.l 00:22:29.435 000000a0 54 c6 1e 08 c0 d1 f5 6f 6f ac bf fd 7a 55 9d 1d T......oo...zU.. 
00:22:29.435 000000b0 7d 3d 92 c5 90 01 ec 1c 2a 84 96 a7 dc 02 69 88 }=......*.....i. 00:22:29.435 000000c0 fe 58 70 b0 ab 3b b0 42 c3 a8 73 49 2b ce 3e de .Xp..;.B..sI+.>. 00:22:29.435 000000d0 0a aa a5 3b 17 34 d8 5f 63 0d d8 41 0a 1b 6b 77 ...;.4._c..A..kw 00:22:29.435 000000e0 f9 67 5e 45 03 7a de b9 76 f1 c4 e1 78 3f 8e 1d .g^E.z..v...x?.. 00:22:29.435 000000f0 5d a3 9e fd fa 79 2e 5c cb 6c 75 16 1c 4d 63 04 ]....y.\.lu..Mc. 00:22:29.435 [2024-09-27 13:27:00.604451] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key1, hash=1, dhgroup=1, seq=3775755176, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.435 [2024-09-27 13:27:00.604846] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.436 [2024-09-27 13:27:00.609516] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.436 [2024-09-27 13:27:00.609910] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.436 [2024-09-27 13:27:00.610123] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.436 [2024-09-27 13:27:00.610304] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.436 [2024-09-27 13:27:00.705622] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.436 [2024-09-27 13:27:00.705788] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.436 [2024-09-27 13:27:00.706075] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 1 (ffdhe2048) 00:22:29.436 [2024-09-27 13:27:00.706226] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.436 [2024-09-27 13:27:00.706716] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.436 ctrlr pubkey: 00:22:29.436 00000000 96 68 c3 75 42 c9 47 a1 43 5b 70 83 a6 d8 70 fa .h.uB.G.C[p...p. 00:22:29.436 00000010 ac 51 60 41 9a c7 45 58 3b 42 74 cb 1b 05 02 34 .Q`A..EX;Bt....4 00:22:29.436 00000020 d2 11 6c 07 ff 43 a1 fe b3 10 2f bb b4 42 bb b5 ..l..C..../..B.. 00:22:29.436 00000030 1f cd bc 09 7b 38 a1 5c 22 15 31 63 d4 0e 83 f9 ....{8.\".1c.... 00:22:29.436 00000040 8f 63 3e b8 f5 8f b6 df 38 bd 5a 7a 0d 24 9e 3e .c>.....8.Zz.$.> 00:22:29.436 00000050 80 75 61 32 33 f7 fe 09 bd c9 75 c8 48 b8 0f 36 .ua23.....u.H..6 00:22:29.436 00000060 4c 43 5e 23 85 a4 6e f1 14 70 6c 83 57 07 09 c8 LC^#..n..pl.W... 00:22:29.436 00000070 b0 12 5f ab ba 46 57 38 93 70 d1 f4 a5 ac 5c 7d .._..FW8.p....\} 00:22:29.436 00000080 25 c8 2e 83 0f 9a 7a 64 86 b8 3b 87 10 dc aa 67 %.....zd..;....g 00:22:29.436 00000090 8e 40 56 74 95 88 2d ab 69 96 cc e4 b9 e2 1f ad .@Vt..-.i....... 00:22:29.436 000000a0 29 c2 7f c3 ca d5 bc 42 f6 45 11 45 ec a5 99 8f )......B.E.E.... 00:22:29.436 000000b0 50 71 00 70 6d cf 3e fb a4 7c 31 28 ef dc bb ee Pq.pm.>..|1(.... 
00:22:29.436 000000c0 93 96 a0 be 82 37 40 2f 48 ee e6 03 df 7b 5f c8 .....7@/H....{_. 00:22:29.436 000000d0 a8 a6 ad 82 8b 4e 86 03 2a 69 36 76 2e b1 71 73 .....N..*i6v..qs 00:22:29.436 000000e0 a1 18 b8 3d 68 73 40 f0 1a 9e 9d 4a 7a d7 64 b3 ...=hs@....Jz.d. 00:22:29.436 000000f0 ea ea a6 7b 7f b5 fb 10 f8 6d 88 bd 49 41 ec 1a ...{.....m..IA.. 00:22:29.436 host pubkey: 00:22:29.436 00000000 44 57 da f4 f7 81 7f 74 30 fb a6 34 21 5a b0 73 DW.....t0..4!Z.s 00:22:29.436 00000010 5a 89 23 f9 2a d8 49 b3 56 29 ea 01 33 dc 6f d1 Z.#.*.I.V)..3.o. 00:22:29.436 00000020 6e c4 29 69 68 61 5f fa 32 ad a2 23 1c 82 77 2e n.)iha_.2..#..w. 00:22:29.436 00000030 d6 2e 2d de 9a ed c4 aa 57 ea b2 a9 41 a3 3e 28 ..-.....W...A.>( 00:22:29.436 00000040 7e 5a 1c 5c c3 0a 44 01 80 49 55 09 8a d0 13 96 ~Z.\..D..IU..... 00:22:29.436 00000050 1e 1e fb 86 4c c7 cc b1 9f 65 5e 81 c6 25 0b ae ....L....e^..%.. 00:22:29.436 00000060 2c 37 ab d0 fa fb 79 41 8d 88 d6 d8 16 21 e5 d9 ,7....yA.....!.. 00:22:29.436 00000070 5d 79 8f a9 dc 56 8a 40 c1 b3 02 62 7c c7 2d fd ]y...V.@...b|.-. 00:22:29.436 00000080 e1 a9 19 fe 39 c4 0e a6 8a dc 3d a5 05 e9 01 fe ....9.....=..... 00:22:29.436 00000090 f8 46 ae 51 ad e6 3e e5 26 0d 43 91 f2 45 a7 e3 .F.Q..>.&.C..E.. 00:22:29.436 000000a0 d9 1f 67 73 91 c6 7d 77 f8 d2 c0 6f 41 fb 72 17 ..gs..}w...oA.r. 00:22:29.436 000000b0 33 98 96 11 2e 1f fe 93 da 6a 54 2b f4 19 49 34 3........jT+..I4 00:22:29.436 000000c0 c1 04 a8 ab e3 25 51 a4 c7 a7 df f2 5e 4f 54 e6 .....%Q.....^OT. 00:22:29.436 000000d0 69 16 49 c8 8f 81 ea f1 89 9a 94 68 ff 59 98 63 i.I........h.Y.c 00:22:29.436 000000e0 30 5a a6 96 f3 4e cf e4 10 82 6e 08 cc b4 89 58 0Z...N....n....X 00:22:29.436 000000f0 48 ac 2d 65 d2 8a 8c 78 21 75 87 5d 2f 78 e3 8b H.-e...x!u.]/x.. 00:22:29.436 dh secret: 00:22:29.436 00000000 1b 03 37 97 49 66 b3 f3 b2 a5 73 98 63 91 63 d3 ..7.If....s.c.c. 00:22:29.436 00000010 bd c1 cd 11 07 6d 19 e8 b8 fe 4d cf 5c 1e 99 d1 .....m....M.\... 00:22:29.436 00000020 37 60 ad ef 34 77 d5 ff 97 b1 ff 32 95 06 9b c5 7`..4w.....2.... 00:22:29.436 00000030 25 ea 4f 13 4b c1 9e 8f 81 89 b4 71 fa 06 f1 99 %.O.K......q.... 00:22:29.436 00000040 b3 d4 48 31 61 c5 be e9 b6 8f 62 3d c4 8a 76 a7 ..H1a.....b=..v. 00:22:29.436 00000050 e2 1d ab f9 8b 9c da 8a 94 67 0c c1 a0 a6 fe de .........g...... 00:22:29.436 00000060 2e f3 a5 76 0e fd e3 94 99 78 5e 09 47 e3 06 32 ...v.....x^.G..2 00:22:29.436 00000070 e7 19 ca 5d c8 c7 1b c4 73 21 44 19 ff b3 4c 93 ...]....s!D...L. 00:22:29.436 00000080 a8 92 da 47 57 61 ad 36 62 b1 92 94 1d 73 d7 a5 ...GWa.6b....s.. 00:22:29.436 00000090 05 64 13 fb 59 b8 03 90 0a 63 b5 35 0e 58 60 0b .d..Y....c.5.X`. 00:22:29.436 000000a0 a8 ba 9b c4 26 be 3f a0 e8 64 de 5a 06 f2 b2 5b ....&.?..d.Z...[ 00:22:29.436 000000b0 b0 9d b5 1f 7b a7 1c b0 af aa 04 d9 70 d9 9f b0 ....{.......p... 
00:22:29.436 000000c0 3e dc 4e a3 67 21 44 77 48 54 9d 2f 9f 41 ee 5e >.N.g!DwHT./.A.^ 00:22:29.436 000000d0 06 10 aa 1d 1e 49 6e 1b ff 6a e1 31 b7 f4 a0 77 .....In..j.1...w 00:22:29.436 000000e0 91 1d 67 ba f9 15 fd 4e 1a 62 c1 36 ea 99 de 66 ..g....N.b.6...f 00:22:29.436 000000f0 e0 cf 33 df 04 f8 37 0f c4 0b fb 5e 43 71 27 6f ..3...7....^Cq'o 00:22:29.436 [2024-09-27 13:27:00.713702] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key2, hash=1, dhgroup=1, seq=3775755177, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.436 [2024-09-27 13:27:00.714031] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.436 [2024-09-27 13:27:00.718311] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.436 [2024-09-27 13:27:00.718614] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.436 [2024-09-27 13:27:00.718804] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.436 [2024-09-27 13:27:00.719054] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.436 [2024-09-27 13:27:00.770974] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.436 [2024-09-27 13:27:00.771158] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 1 (sha256) 00:22:29.436 [2024-09-27 13:27:00.771265] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 1 (ffdhe2048) 00:22:29.436 [2024-09-27 13:27:00.771399] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.436 [2024-09-27 13:27:00.771815] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.436 ctrlr pubkey: 00:22:29.436 00000000 96 68 c3 75 42 c9 47 a1 43 5b 70 83 a6 d8 70 fa .h.uB.G.C[p...p. 00:22:29.436 00000010 ac 51 60 41 9a c7 45 58 3b 42 74 cb 1b 05 02 34 .Q`A..EX;Bt....4 00:22:29.436 00000020 d2 11 6c 07 ff 43 a1 fe b3 10 2f bb b4 42 bb b5 ..l..C..../..B.. 00:22:29.436 00000030 1f cd bc 09 7b 38 a1 5c 22 15 31 63 d4 0e 83 f9 ....{8.\".1c.... 00:22:29.436 00000040 8f 63 3e b8 f5 8f b6 df 38 bd 5a 7a 0d 24 9e 3e .c>.....8.Zz.$.> 00:22:29.436 00000050 80 75 61 32 33 f7 fe 09 bd c9 75 c8 48 b8 0f 36 .ua23.....u.H..6 00:22:29.436 00000060 4c 43 5e 23 85 a4 6e f1 14 70 6c 83 57 07 09 c8 LC^#..n..pl.W... 00:22:29.436 00000070 b0 12 5f ab ba 46 57 38 93 70 d1 f4 a5 ac 5c 7d .._..FW8.p....\} 00:22:29.436 00000080 25 c8 2e 83 0f 9a 7a 64 86 b8 3b 87 10 dc aa 67 %.....zd..;....g 00:22:29.436 00000090 8e 40 56 74 95 88 2d ab 69 96 cc e4 b9 e2 1f ad .@Vt..-.i....... 00:22:29.436 000000a0 29 c2 7f c3 ca d5 bc 42 f6 45 11 45 ec a5 99 8f )......B.E.E.... 00:22:29.436 000000b0 50 71 00 70 6d cf 3e fb a4 7c 31 28 ef dc bb ee Pq.pm.>..|1(.... 00:22:29.436 000000c0 93 96 a0 be 82 37 40 2f 48 ee e6 03 df 7b 5f c8 .....7@/H....{_. 
00:22:29.436 000000d0 a8 a6 ad 82 8b 4e 86 03 2a 69 36 76 2e b1 71 73 .....N..*i6v..qs 00:22:29.436 000000e0 a1 18 b8 3d 68 73 40 f0 1a 9e 9d 4a 7a d7 64 b3 ...=hs@....Jz.d. 00:22:29.436 000000f0 ea ea a6 7b 7f b5 fb 10 f8 6d 88 bd 49 41 ec 1a ...{.....m..IA.. 00:22:29.436 host pubkey: 00:22:29.436 00000000 71 5e 6d 48 29 47 50 4a 76 b9 b5 08 6a 2e e8 df q^mH)GPJv...j... 00:22:29.436 00000010 8e d2 f8 e0 db 33 fd ab f6 ec 05 90 a7 64 16 c1 .....3.......d.. 00:22:29.436 00000020 1e 67 26 a9 85 d7 b1 64 0e d2 db b1 bf a0 4d 92 .g&....d......M. 00:22:29.436 00000030 3b 79 de cb ab 90 d4 e7 04 89 6a af f0 2b 4b 58 ;y........j..+KX 00:22:29.436 00000040 a6 79 f2 e3 6a 6e a0 c9 37 cb ca f7 e8 aa 39 79 .y..jn..7.....9y 00:22:29.436 00000050 8a 25 b8 3c 3b 11 23 1d 02 b1 ba 17 3d 07 be 83 .%.<;.#.....=... 00:22:29.436 00000060 c0 8d 0b 7d 9a e5 5e 83 5b a5 dc ad d9 43 78 e4 ...}..^.[....Cx. 00:22:29.436 00000070 e2 a0 a7 ba 48 78 c2 73 60 86 58 c7 03 5a 65 a9 ....Hx.s`.X..Ze. 00:22:29.436 00000080 fd ff 00 a0 9d dc bf 24 29 d3 b8 bf d6 1e b9 ce .......$)....... 00:22:29.436 00000090 eb 4b 2a 8c fe 05 c9 4b ba d3 84 1f 99 20 a6 8c .K*....K..... .. 00:22:29.436 000000a0 d2 f1 6b 6d d4 88 7f ee 02 33 92 4c 27 6e 05 f8 ..km.....3.L'n.. 00:22:29.436 000000b0 16 a9 fb 11 89 3c 68 3c e9 e5 fe 12 41 7a 24 57 .......}...c.B...'Ds 00:22:29.436 00000010 a0 3d 08 ca 15 d6 03 5f d7 9f f9 6e f8 d7 4b a2 .=....._...n..K. 00:22:29.436 00000020 5f 91 a2 ed de cb 94 f7 9d fd ef 8d 66 68 fc 71 _...........fh.q 00:22:29.436 00000030 cc 71 ca 04 1d d3 20 f7 71 c3 5c 40 6e c2 51 75 .q.... .q.\@n.Qu 00:22:29.436 00000040 d9 c9 fd d7 32 dc 09 04 9d 86 3c 18 de 61 1d 46 ....2.....<..a.F 00:22:29.436 00000050 5e fd 57 94 77 f7 3d 58 06 72 69 76 2a 69 6d 9d ^.W.w.=X.riv*im. 00:22:29.436 00000060 bb 70 e9 4d 2c 91 45 7b 29 0f f1 68 30 01 0c dc .p.M,.E{)..h0... 00:22:29.436 00000070 5f cd 35 f9 d6 ad 69 68 cc bd 3c 5e 22 c9 25 96 _.5...ih..<^".%. 00:22:29.436 00000080 d9 4f 16 53 9e b1 7a a6 a1 b6 70 d3 79 53 c7 ea .O.S..z...p.yS.. 00:22:29.436 00000090 7c 96 b8 4e c8 9f b2 e7 ae 5e 02 23 ca f0 15 01 |..N.....^.#.... 00:22:29.436 000000a0 48 8e 78 f8 1b 01 d2 23 f4 61 1f 16 30 2e 32 e1 H.x....#.a..0.2. 00:22:29.436 000000b0 32 a6 7f 90 12 e4 cb 86 ea cb 6a b7 a9 71 11 e2 2.........j..q.. 00:22:29.436 000000c0 b2 a6 6f 6f 4b 76 61 e5 c6 15 7c 37 40 fd 73 d6 ..ooKva...|7@.s. 00:22:29.436 000000d0 81 ff 34 8b c2 24 5c b4 e7 c4 ed 62 f6 33 d4 9a ..4..$\....b.3.. 00:22:29.436 000000e0 c2 4e 63 7d 13 50 9a 5c 3b 72 f0 b4 0b a2 ba a1 .Nc}.P.\;r...... 
00:22:29.436 000000f0 f6 d4 79 cb 3a b5 66 8c 39 39 4c ff 31 2b 1c 5c ..y.:.f.99L.1+.\ 00:22:29.436 [2024-09-27 13:27:00.779525] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key2, hash=1, dhgroup=1, seq=3775755178, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.437 [2024-09-27 13:27:00.779951] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.437 [2024-09-27 13:27:00.785365] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.437 [2024-09-27 13:27:00.785766] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.437 [2024-09-27 13:27:00.785978] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.437 [2024-09-27 13:27:00.786249] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.437 [2024-09-27 13:27:00.886476] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.437 [2024-09-27 13:27:00.886774] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.437 [2024-09-27 13:27:00.886938] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 1 (ffdhe2048) 00:22:29.437 [2024-09-27 13:27:00.887069] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.437 [2024-09-27 13:27:00.887404] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.437 ctrlr pubkey: 00:22:29.437 00000000 9d 5b 56 e3 a8 37 e5 4d 25 00 dc 05 81 4d f1 55 .[V..7.M%....M.U 00:22:29.437 00000010 8a 25 70 d1 07 c1 15 42 a0 1c 15 e9 cc e6 7f 83 .%p....B........ 00:22:29.437 00000020 d0 09 df 55 f1 de 41 dc 86 56 a5 bb 31 41 e2 8d ...U..A..V..1A.. 00:22:29.437 00000030 68 9f 9b 48 f8 06 56 73 20 0d 38 a2 d9 b2 08 99 h..H..Vs .8..... 00:22:29.437 00000040 d3 0d 6d e2 2d 56 8d 97 3b a1 d6 a6 8f 7f 33 4b ..m.-V..;.....3K 00:22:29.437 00000050 07 fe e7 bd ee 53 f7 1b b0 61 c0 6b 68 a3 f6 c2 .....S...a.kh... 00:22:29.437 00000060 bb f5 da 40 68 cc bf 68 1e ae 39 d6 95 45 b9 a0 ...@h..h..9..E.. 00:22:29.437 00000070 f6 d1 46 da 27 20 df 4d 10 7d ba e9 ec 18 24 04 ..F.' .M.}....$. 00:22:29.437 00000080 09 06 9c 4b c5 e7 5d 75 f9 c3 75 1e b7 de e4 45 ...K..]u..u....E 00:22:29.437 00000090 c1 cc 68 24 78 4e 89 4e 6b 4b 5c 5c 06 a9 e6 dc ..h$xN.NkK\\.... 00:22:29.437 000000a0 d2 a3 dd 8b 97 60 e0 9e dc 7e 3f 7d 29 9a aa f9 .....`...~?})... 00:22:29.437 000000b0 a2 b1 d9 aa 5b a5 aa fd 36 48 aa ce a6 43 52 f6 ....[...6H...CR. 00:22:29.437 000000c0 1f 32 4f d9 ac f8 df bc cf bc e8 44 c4 9f 27 24 .2O........D..'$ 00:22:29.437 000000d0 33 55 b1 b6 72 ae 0f 0b 55 04 01 c4 a3 dd 0e 8c 3U..r...U....... 00:22:29.437 000000e0 b9 4a af 9e 52 02 d5 a4 e1 e7 23 bc 4f 1f ef ab .J..R.....#.O... 00:22:29.437 000000f0 4a 04 ad 38 66 e5 2b 3d 68 85 cc a7 b4 9c c1 0d J..8f.+=h....... 
00:22:29.437 host pubkey: 00:22:29.437 00000000 a0 5f 7f 64 ac c2 55 e7 83 fa e3 bc 27 4f d5 c2 ._.d..U.....'O.. 00:22:29.437 00000010 c1 aa 82 80 5e ac 06 bb a8 93 c1 05 7b 10 38 e3 ....^.......{.8. 00:22:29.437 00000020 dd 52 c6 0a 14 4c e4 15 0f 80 7f aa 25 c4 60 98 .R...L......%.`. 00:22:29.437 00000030 c7 d9 ad 10 d5 b4 ce 78 7e 9d 13 14 98 da 8f cf .......x~....... 00:22:29.437 00000040 48 3d 30 84 d6 96 ee 8e 74 74 e0 b9 be 9d 46 2b H=0.....tt....F+ 00:22:29.437 00000050 a2 a5 ce f3 0d fa d7 22 38 35 9a a9 f3 53 bf c6 ......."85...S.. 00:22:29.437 00000060 ce b6 08 05 28 67 3d 22 a5 83 11 d3 76 9a e3 af ....(g="....v... 00:22:29.437 00000070 3c c9 b8 ef b1 3e 0c a4 1e fd 7f a3 e4 ed 82 50 <....>.........P 00:22:29.437 00000080 51 7e 6a 13 48 a6 d6 53 3e fc 88 fc c9 59 93 96 Q~j.H..S>....Y.. 00:22:29.437 00000090 5f 98 1a 41 40 c5 7d 1f 2c 04 52 14 85 dd 6d 18 _..A@.}.,.R...m. 00:22:29.437 000000a0 24 68 7d 52 1a 66 5f a2 a1 db 25 aa cd 30 ba 36 $h}R.f_...%..0.6 00:22:29.437 000000b0 02 2d 42 55 04 3e 98 f8 7e d1 0e f2 a6 17 0e 95 .-BU.>..~....... 00:22:29.437 000000c0 6f ea 01 e3 c5 2a f1 46 33 10 b6 6a bb 23 60 bb o....*.F3..j.#`. 00:22:29.437 000000d0 d3 d0 7a 83 ee 1b 45 7d 14 c5 ce 9d 91 dc b6 d4 ..z...E}........ 00:22:29.437 000000e0 3b 0f 17 0c 84 e7 8a 99 bb 2d 2a 6d e1 b2 57 92 ;........-*m..W. 00:22:29.437 000000f0 e2 a8 0c 84 ff a4 e4 47 83 bc e5 76 d0 8c 27 e7 .......G...v..'. 00:22:29.437 dh secret: 00:22:29.437 00000000 64 3a 2b f0 b6 ec b7 c6 49 78 a0 64 f8 f6 0f 2c d:+.....Ix.d..., 00:22:29.437 00000010 0c 47 12 49 90 ee dd d3 34 4a e6 8b 27 85 3c e6 .G.I....4J..'.<. 00:22:29.437 00000020 4d 31 a2 73 01 1c 8a 27 cf d1 3d 73 29 fd c2 b3 M1.s...'..=s)... 00:22:29.437 00000030 fa c7 8b 5e 43 17 76 e5 6f b3 8a 1c 9a 34 b3 7d ...^C.v.o....4.} 00:22:29.437 00000040 de e5 0d ad 89 07 50 66 48 49 ee 3d 27 4f c7 6b ......PfHI.='O.k 00:22:29.437 00000050 28 d2 bb 07 d9 95 69 96 f8 45 0c 84 05 53 d2 5d (.....i..E...S.] 00:22:29.437 00000060 38 c2 0c 89 7c de ff 1e c7 ae 51 aa 7a c8 c5 38 8...|.....Q.z..8 00:22:29.437 00000070 ab ae 7c 80 38 f9 25 48 16 a2 79 e8 c4 d4 75 60 ..|.8.%H..y...u` 00:22:29.437 00000080 ea 42 d7 05 ff b1 40 83 a9 a6 bb a2 60 0b be 27 .B....@.....`..' 00:22:29.437 00000090 ae 4e 56 20 07 d2 cf 44 87 82 10 9b d5 9f 85 22 .NV ...D......." 00:22:29.437 000000a0 ed 88 c1 3b 51 2f 96 d1 0c 0d 48 c6 4c ea 47 fa ...;Q/....H.L.G. 00:22:29.437 000000b0 1d b8 7e be 3c ba e4 0c d3 da 3e 54 4a 66 89 2d ..~.<.....>TJf.- 00:22:29.437 000000c0 44 b2 92 35 e5 c1 07 e9 79 21 e3 2d 5e d7 31 9d D..5....y!.-^.1. 00:22:29.437 000000d0 17 54 96 1f 50 fa a4 53 d0 26 84 62 3e cc ec b5 .T..P..S.&.b>... 00:22:29.437 000000e0 2f 58 95 65 c5 ec d4 cb cc 93 10 37 c5 7c 2f 27 /X.e.......7.|/' 00:22:29.437 000000f0 05 15 fb 61 eb 77 a9 a0 6a b1 04 e2 30 a5 79 3f ...a.w..j...0.y? 
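Every transaction above ends with an nvme_auth_send_reply line carrying key=, hash=, dhgroup=, seq=, tid= and len= fields in a fixed order. When scanning a run of this length it can help to pull those fields out programmatically; the sketch below relies only on the field names visible in the log, uses standard C, and is not part of the SPDK tree.

/* Illustrative parser for the "nvme_auth_send_reply" debug lines above.
 * It extracts the numeric fields with plain strstr()/sscanf(); the field
 * names come verbatim from the log, everything else is a sketch. */
#include <stdio.h>
#include <string.h>

struct reply_fields {
    char key[16];
    int hash;
    int dhgroup;
    unsigned long seq;
    int tid;
    int len;
};

static int parse_reply_line(const char *line, struct reply_fields *out)
{
    const char *p = strstr(line, "key=");

    if (p == NULL || sscanf(p, "key=%15[^,]", out->key) != 1)
        return -1;
    if ((p = strstr(line, "hash=")) == NULL || sscanf(p, "hash=%d", &out->hash) != 1)
        return -1;
    if ((p = strstr(line, "dhgroup=")) == NULL || sscanf(p, "dhgroup=%d", &out->dhgroup) != 1)
        return -1;
    if ((p = strstr(line, "seq=")) == NULL || sscanf(p, "seq=%lu", &out->seq) != 1)
        return -1;
    if ((p = strstr(line, "tid=")) == NULL || sscanf(p, "tid=%d", &out->tid) != 1)
        return -1;
    if ((p = strstr(line, "len=")) == NULL || sscanf(p, "len=%d", &out->len) != 1)
        return -1;
    return 0;
}

int main(void)
{
    const char *sample =
        "key=key0, hash=1, dhgroup=1, seq=3775755173, tid=0, "
        "subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32";
    struct reply_fields f;

    if (parse_reply_line(sample, &f) == 0)
        printf("%s hash=%d dhgroup=%d seq=%lu tid=%d len=%d\n",
               f.key, f.hash, f.dhgroup, f.seq, f.tid, f.len);
    return 0;
}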
00:22:29.437 [2024-09-27 13:27:00.896204] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key3, hash=1, dhgroup=1, seq=3775755179, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.437 [2024-09-27 13:27:00.896652] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.437 [2024-09-27 13:27:00.902906] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.437 [2024-09-27 13:27:00.903366] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.437 [2024-09-27 13:27:00.903547] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.437 [2024-09-27 13:27:00.903866] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.437 [2024-09-27 13:27:00.956903] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.437 [2024-09-27 13:27:00.957109] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 1 (sha256) 00:22:29.437 [2024-09-27 13:27:00.957254] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 1 (ffdhe2048) 00:22:29.437 [2024-09-27 13:27:00.957643] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.437 [2024-09-27 13:27:00.957988] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.437 ctrlr pubkey: 00:22:29.437 00000000 9d 5b 56 e3 a8 37 e5 4d 25 00 dc 05 81 4d f1 55 .[V..7.M%....M.U 00:22:29.437 00000010 8a 25 70 d1 07 c1 15 42 a0 1c 15 e9 cc e6 7f 83 .%p....B........ 00:22:29.437 00000020 d0 09 df 55 f1 de 41 dc 86 56 a5 bb 31 41 e2 8d ...U..A..V..1A.. 00:22:29.437 00000030 68 9f 9b 48 f8 06 56 73 20 0d 38 a2 d9 b2 08 99 h..H..Vs .8..... 00:22:29.437 00000040 d3 0d 6d e2 2d 56 8d 97 3b a1 d6 a6 8f 7f 33 4b ..m.-V..;.....3K 00:22:29.437 00000050 07 fe e7 bd ee 53 f7 1b b0 61 c0 6b 68 a3 f6 c2 .....S...a.kh... 00:22:29.437 00000060 bb f5 da 40 68 cc bf 68 1e ae 39 d6 95 45 b9 a0 ...@h..h..9..E.. 00:22:29.437 00000070 f6 d1 46 da 27 20 df 4d 10 7d ba e9 ec 18 24 04 ..F.' .M.}....$. 00:22:29.437 00000080 09 06 9c 4b c5 e7 5d 75 f9 c3 75 1e b7 de e4 45 ...K..]u..u....E 00:22:29.437 00000090 c1 cc 68 24 78 4e 89 4e 6b 4b 5c 5c 06 a9 e6 dc ..h$xN.NkK\\.... 00:22:29.437 000000a0 d2 a3 dd 8b 97 60 e0 9e dc 7e 3f 7d 29 9a aa f9 .....`...~?})... 00:22:29.437 000000b0 a2 b1 d9 aa 5b a5 aa fd 36 48 aa ce a6 43 52 f6 ....[...6H...CR. 00:22:29.437 000000c0 1f 32 4f d9 ac f8 df bc cf bc e8 44 c4 9f 27 24 .2O........D..'$ 00:22:29.437 000000d0 33 55 b1 b6 72 ae 0f 0b 55 04 01 c4 a3 dd 0e 8c 3U..r...U....... 00:22:29.437 000000e0 b9 4a af 9e 52 02 d5 a4 e1 e7 23 bc 4f 1f ef ab .J..R.....#.O... 00:22:29.437 000000f0 4a 04 ad 38 66 e5 2b 3d 68 85 cc a7 b4 9c c1 0d J..8f.+=h....... 00:22:29.437 host pubkey: 00:22:29.437 00000000 fe d3 ad bc 22 75 06 37 28 b0 6c a2 0c 1d 8c 9d ...."u.7(.l..... 
00:22:29.437 00000010 9e 79 dd 6f 87 3b aa 7f c6 40 90 38 61 8a c1 8a .y.o.;...@.8a... 00:22:29.437 00000020 4d 63 0c 87 84 93 59 3b 31 c4 d9 f2 2a a2 ee d0 Mc....Y;1...*... 00:22:29.437 00000030 1c 7a 86 6b 21 d5 0e a0 63 3e 0f 4e fb 06 e5 26 .z.k!...c>.N...& 00:22:29.437 00000040 5a 37 6b 81 60 b4 8a 04 42 2f d6 f2 49 38 1b 69 Z7k.`...B/..I8.i 00:22:29.437 00000050 d8 51 3a 18 ec a3 83 1e a8 70 fd c2 cd 6e 87 fa .Q:......p...n.. 00:22:29.437 00000060 f8 5d eb 49 d9 cf 45 4b d8 48 a8 46 2f e1 5a fd .].I..EK.H.F/.Z. 00:22:29.437 00000070 28 7e 9c 6a b2 66 37 8f 6b 59 ec ab c1 97 1b 6d (~.j.f7.kY.....m 00:22:29.437 00000080 94 de 74 9f 8c 25 6c 9b 08 7e 3f aa fe 67 e0 c9 ..t..%l..~?..g.. 00:22:29.437 00000090 c5 81 06 c2 e4 3a 92 0d 63 58 d0 2b 07 d8 fc 97 .....:..cX.+.... 00:22:29.437 000000a0 4d 5d 1e ae 1b 9e 8e 35 87 a6 2f 9e dc 6d 14 0a M].....5../..m.. 00:22:29.438 000000b0 04 f3 fd fa c3 fc 3f ce e5 27 52 a9 02 08 f2 41 ......?..'R....A 00:22:29.438 000000c0 82 5e af 70 68 f8 8c 77 c3 59 c0 83 aa e6 38 9d .^.ph..w.Y....8. 00:22:29.438 000000d0 21 82 f3 2a cb e8 2c fe 0a f3 f8 ea ae 84 8e d1 !..*..,......... 00:22:29.438 000000e0 9b 5f 70 b0 21 48 31 0f ff c7 18 c2 d3 7e 34 1a ._p.!H1......~4. 00:22:29.438 000000f0 a9 8c 47 1e c9 c0 46 2a bf 64 f1 1e 69 61 c8 8d ..G...F*.d..ia.. 00:22:29.438 dh secret: 00:22:29.438 00000000 f3 70 61 96 61 00 61 e2 85 9d 95 fb f4 33 b3 9d .pa.a.a......3.. 00:22:29.438 00000010 a1 6e 9f 8d ad f5 96 74 1b e4 70 65 36 2a 39 98 .n.....t..pe6*9. 00:22:29.438 00000020 c0 23 9d ba fc a9 ae f5 5e 38 f2 b4 0d 48 80 67 .#......^8...H.g 00:22:29.438 00000030 04 06 f5 c2 f3 0d a6 8c 63 d4 4a 0c 99 13 9a 27 ........c.J....' 00:22:29.438 00000040 cf d1 f3 03 f2 02 f7 eb 45 03 ae b5 0e 6e d7 6b ........E....n.k 00:22:29.438 00000050 1a d0 f9 a3 0b 6e 4b 87 a9 bc f4 d0 e8 ef 74 5f .....nK.......t_ 00:22:29.438 00000060 66 c4 6c e5 e3 6e 6d 79 f6 be 26 f3 01 6a ee 4a f.l..nmy..&..j.J 00:22:29.438 00000070 84 94 4e 5a 5b 09 69 06 fd 61 e2 7e 54 7c 15 3c ..NZ[.i..a.~T|.< 00:22:29.438 00000080 fd 8f 84 e0 d4 27 80 4f ed fe 90 ae 37 af 97 25 .....'.O....7..% 00:22:29.438 00000090 1d 18 1d 3d 0f e7 e5 90 20 3f d2 71 6c c1 d6 08 ...=.... ?.ql... 00:22:29.438 000000a0 41 57 6d a7 a4 a7 33 02 57 0d f4 8f a7 ce ea 57 AWm...3.W......W 00:22:29.438 000000b0 05 dd 02 d9 1f 29 13 b2 9f d7 d9 af 0c 48 cb 2c .....).......H., 00:22:29.438 000000c0 92 50 05 b7 fd 57 0c 56 48 da c5 f8 2f c1 e0 17 .P...W.VH.../... 00:22:29.438 000000d0 78 53 b6 e2 4b 22 ff b0 c6 aa c2 c8 dc 8b 2d 96 xS..K"........-. 00:22:29.438 000000e0 c3 db 4c f7 fc 9a 54 77 7c ef d3 b4 03 a7 e1 3b ..L...Tw|......; 00:22:29.438 000000f0 22 0a 38 f8 e0 7f 39 76 cc e1 3e 44 50 12 17 8c ".8...9v..>DP... 
00:22:29.438 [2024-09-27 13:27:00.967520] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key3, hash=1, dhgroup=1, seq=3775755180, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.438 [2024-09-27 13:27:00.968011] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.438 [2024-09-27 13:27:00.973172] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.438 [2024-09-27 13:27:00.973565] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.438 [2024-09-27 13:27:00.973749] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.438 [2024-09-27 13:27:00.974064] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.438 [2024-09-27 13:27:01.058382] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.438 [2024-09-27 13:27:01.058587] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.438 [2024-09-27 13:27:01.058886] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 1 (ffdhe2048) 00:22:29.438 [2024-09-27 13:27:01.059073] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.438 [2024-09-27 13:27:01.059344] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.438 ctrlr pubkey: 00:22:29.438 00000000 e1 c1 7c 76 57 9e 67 e6 85 61 cc 82 a7 c1 a4 82 ..|vW.g..a...... 00:22:29.438 00000010 99 1b 56 77 04 8d 58 51 6e 28 d2 e5 c0 0c 17 79 ..Vw..XQn(.....y 00:22:29.438 00000020 31 2f 5a 37 1c cd 33 56 48 c5 de 95 5f 16 ac 22 1/Z7..3VH..._.." 00:22:29.438 00000030 e1 2b ba 04 56 05 db e8 ce 04 9d 02 f3 13 22 8c .+..V.........". 00:22:29.438 00000040 b5 4b 4f ff e4 6d 4b 01 89 1b 88 dc da 1c 49 0a .KO..mK.......I. 00:22:29.438 00000050 79 21 92 dc 30 32 ff cf db 55 fc 02 34 56 31 34 y!..02...U..4V14 00:22:29.438 00000060 82 27 00 60 10 89 aa 93 db 9e 2a bc 27 7a 0e 4c .'.`......*.'z.L 00:22:29.438 00000070 55 22 6a 26 4a 03 d9 c6 9f 6e dd 64 45 1a 64 47 U"j&J....n.dE.dG 00:22:29.438 00000080 ea 09 94 23 9f 24 e2 d2 d0 44 25 73 da ce 23 07 ...#.$...D%s..#. 00:22:29.438 00000090 65 15 0d ff 99 56 5a 0f 53 67 91 ed 24 ad 94 78 e....VZ.Sg..$..x 00:22:29.438 000000a0 37 c9 83 eb 9c 89 0d 07 00 16 54 8e 76 de 48 31 7.........T.v.H1 00:22:29.438 000000b0 fb ec e4 63 bc fd 9c fd cc 2f 3a 54 7e bb 2e 48 ...c...../:T~..H 00:22:29.438 000000c0 bd 41 fc d2 2e 13 10 25 ff 24 5a 9c ca 48 43 f2 .A.....%.$Z..HC. 00:22:29.438 000000d0 45 cd bd 36 66 80 da f7 39 7f 37 18 35 b9 63 ca E..6f...9.7.5.c. 00:22:29.438 000000e0 48 ff 2d 88 af 3b 05 37 5e fc b8 f8 db 3d 3f 7c H.-..;.7^....=?| 00:22:29.438 000000f0 36 79 eb 20 f8 88 6c 9a af 55 bf 66 f2 3d d7 0f 6y. ..l..U.f.=.. 00:22:29.438 host pubkey: 00:22:29.438 00000000 44 1d dc 3a 96 18 0a ea b5 ac 1a 99 5e 01 59 a9 D..:........^.Y. 
00:22:29.438 00000010 5a fd b3 0f ae cd 0c 54 41 cc c3 d0 2b 76 f5 6e Z......TA...+v.n 00:22:29.438 00000020 b5 d1 ac cd d4 72 f3 12 19 a0 ab 99 11 f6 ee da .....r.......... 00:22:29.438 00000030 8a fc aa 74 87 b4 6d 46 74 ba 20 5f 36 91 7b e1 ...t..mFt. _6.{. 00:22:29.438 00000040 18 c1 23 be 5b e9 87 f0 5a 1e 35 40 86 d0 3c 9c ..#.[...Z.5@..<. 00:22:29.438 00000050 47 1a 94 de e2 73 a8 75 2c f3 05 64 3d 44 08 22 G....s.u,..d=D." 00:22:29.438 00000060 34 c3 42 71 2a 97 f2 4d 18 fd db 31 1c ce 64 a4 4.Bq*..M...1..d. 00:22:29.438 00000070 c2 cc 41 f4 de ad 7e c3 b2 84 1b 14 15 3a ee f1 ..A...~......:.. 00:22:29.438 00000080 1b fb 67 01 d1 24 7a dd 19 1b c2 fe 8b f1 b8 9d ..g..$z......... 00:22:29.438 00000090 94 fb 72 6d 5a 61 1d 13 fa 49 b8 e9 2a 4f 55 19 ..rmZa...I..*OU. 00:22:29.438 000000a0 42 38 97 fa 53 0f 8f 4f 48 6b d4 40 a4 f5 86 df B8..S..OHk.@.... 00:22:29.438 000000b0 91 df d7 44 5e 71 3a 15 16 19 e5 bc 4c 7e 76 ff ...D^q:.....L~v. 00:22:29.438 000000c0 06 83 aa 09 07 df 9a 84 14 00 a6 9a 5a b4 a6 2a ............Z..* 00:22:29.438 000000d0 e1 b0 95 17 cf 74 43 9d 47 6c e4 83 7e 6c 90 c0 .....tC.Gl..~l.. 00:22:29.438 000000e0 52 30 14 4f bc 2d 4b 89 58 da 23 0e b4 6a c0 42 R0.O.-K.X.#..j.B 00:22:29.438 000000f0 7a 10 63 dd 42 ea 54 45 f5 02 d2 48 90 2d cd b0 z.c.B.TE...H.-.. 00:22:29.438 dh secret: 00:22:29.438 00000000 3b fa f6 11 cb dd f7 3f fa 75 6c 43 d2 69 48 30 ;......?.ulC.iH0 00:22:29.438 00000010 09 42 11 be 3b ac b0 a7 e4 e0 16 b0 c0 14 0b 6c .B..;..........l 00:22:29.438 00000020 0c b4 25 56 bb 74 2a 14 54 fa 79 c4 d0 8c a2 64 ..%V.t*.T.y....d 00:22:29.438 00000030 f3 cd ea 56 41 8f ea bb b4 b0 9b 37 16 43 da a0 ...VA......7.C.. 00:22:29.438 00000040 c5 f1 ea ab a5 db ec 3a b8 02 6d 0d 00 05 a7 48 .......:..m....H 00:22:29.438 00000050 6b 59 32 cf 40 4b 2e 69 7c e1 d8 ce 89 63 9f 52 kY2.@K.i|....c.R 00:22:29.438 00000060 39 5c 76 4d 92 58 77 08 8b e4 d0 4d ae 9d 02 04 9\vM.Xw....M.... 00:22:29.438 00000070 c7 d7 f5 cd a1 d2 1f 8f cd 50 34 4c df fe 4d d2 .........P4L..M. 00:22:29.438 00000080 01 99 08 b7 e9 42 26 4e a8 91 37 d5 59 3e 48 0c .....B&N..7.Y>H. 00:22:29.438 00000090 d8 4e 0e 0f c1 d2 36 54 8c 7a b6 f8 fe 27 ce 5d .N....6T.z...'.] 00:22:29.438 000000a0 b3 18 f3 e5 76 65 44 bc 46 81 e9 a6 cd ab 80 69 ....veD.F......i 00:22:29.438 000000b0 64 f5 36 01 b2 00 d6 e3 59 23 21 43 3f 54 c1 4c d.6.....Y#!C?T.L 00:22:29.438 000000c0 9d 07 37 66 89 ff 86 42 1e e5 73 54 92 b0 ed ed ..7f...B..sT.... 00:22:29.438 000000d0 b8 4a 4d c8 7e 01 d9 bc 92 ab 6f 9f da be d7 4e .JM.~.....o....N 00:22:29.438 000000e0 e5 b4 b6 ac 46 da a7 98 55 c5 a1 86 d4 be b4 85 ....F...U....... 00:22:29.438 000000f0 2f 64 7c 8f fc 24 8e ee b7 53 ab b9 d6 7e 05 fc /d|..$...S...~.. 
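All of the ctrlr pubkey, host pubkey and dh secret dumps above share one layout: an eight-digit hex offset, sixteen space-separated hex bytes, then an ASCII column with '.' standing in for non-printable bytes. A compact helper that reproduces that layout (an illustrative sketch, not the logging code used by the test) is:

/* Illustrative hexdump helper matching the "offset / 16 hex bytes / ASCII"
 * layout of the pubkey and secret dumps above. Not taken from SPDK. */
#include <ctype.h>
#include <stddef.h>
#include <stdio.h>

static void hexdump(const unsigned char *buf, size_t len)
{
    for (size_t off = 0; off < len; off += 16) {
        size_t n = (len - off < 16) ? len - off : 16;

        printf("%08zx ", off);
        for (size_t i = 0; i < 16; i++) {
            if (i < n)
                printf("%02x ", buf[off + i]);
            else
                printf("   ");
        }
        for (size_t i = 0; i < n; i++)
            putchar(isprint(buf[off + i]) ? buf[off + i] : '.');
        putchar('\n');
    }
}

int main(void)
{
    unsigned char sample[32];

    for (size_t i = 0; i < sizeof(sample); i++)
        sample[i] = (unsigned char)(i * 7 + 3);
    hexdump(sample, sizeof(sample));
    return 0;
}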
00:22:29.438 [2024-09-27 13:27:01.067575] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key4, hash=1, dhgroup=1, seq=3775755181, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.438 [2024-09-27 13:27:01.068005] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.438 [2024-09-27 13:27:01.073842] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.438 [2024-09-27 13:27:01.074185] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.438 [2024-09-27 13:27:01.074263] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.438 [2024-09-27 13:27:01.127051] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.438 [2024-09-27 13:27:01.127257] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 1 (sha256) 00:22:29.438 [2024-09-27 13:27:01.127385] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 1 (ffdhe2048) 00:22:29.438 [2024-09-27 13:27:01.127576] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.438 [2024-09-27 13:27:01.127910] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.438 ctrlr pubkey: 00:22:29.438 00000000 e1 c1 7c 76 57 9e 67 e6 85 61 cc 82 a7 c1 a4 82 ..|vW.g..a...... 00:22:29.438 00000010 99 1b 56 77 04 8d 58 51 6e 28 d2 e5 c0 0c 17 79 ..Vw..XQn(.....y 00:22:29.438 00000020 31 2f 5a 37 1c cd 33 56 48 c5 de 95 5f 16 ac 22 1/Z7..3VH..._.." 00:22:29.438 00000030 e1 2b ba 04 56 05 db e8 ce 04 9d 02 f3 13 22 8c .+..V.........". 00:22:29.438 00000040 b5 4b 4f ff e4 6d 4b 01 89 1b 88 dc da 1c 49 0a .KO..mK.......I. 00:22:29.438 00000050 79 21 92 dc 30 32 ff cf db 55 fc 02 34 56 31 34 y!..02...U..4V14 00:22:29.438 00000060 82 27 00 60 10 89 aa 93 db 9e 2a bc 27 7a 0e 4c .'.`......*.'z.L 00:22:29.438 00000070 55 22 6a 26 4a 03 d9 c6 9f 6e dd 64 45 1a 64 47 U"j&J....n.dE.dG 00:22:29.438 00000080 ea 09 94 23 9f 24 e2 d2 d0 44 25 73 da ce 23 07 ...#.$...D%s..#. 00:22:29.438 00000090 65 15 0d ff 99 56 5a 0f 53 67 91 ed 24 ad 94 78 e....VZ.Sg..$..x 00:22:29.438 000000a0 37 c9 83 eb 9c 89 0d 07 00 16 54 8e 76 de 48 31 7.........T.v.H1 00:22:29.438 000000b0 fb ec e4 63 bc fd 9c fd cc 2f 3a 54 7e bb 2e 48 ...c...../:T~..H 00:22:29.438 000000c0 bd 41 fc d2 2e 13 10 25 ff 24 5a 9c ca 48 43 f2 .A.....%.$Z..HC. 00:22:29.438 000000d0 45 cd bd 36 66 80 da f7 39 7f 37 18 35 b9 63 ca E..6f...9.7.5.c. 00:22:29.438 000000e0 48 ff 2d 88 af 3b 05 37 5e fc b8 f8 db 3d 3f 7c H.-..;.7^....=?| 00:22:29.438 000000f0 36 79 eb 20 f8 88 6c 9a af 55 bf 66 f2 3d d7 0f 6y. ..l..U.f.=.. 00:22:29.438 host pubkey: 00:22:29.438 00000000 14 c9 10 2d b7 bb 63 da e5 c3 d1 25 5b 60 04 d4 ...-..c....%[`.. 00:22:29.438 00000010 33 e5 08 cc c1 fb f5 93 10 12 fa 91 19 7e a2 9b 3............~.. 00:22:29.438 00000020 c0 b3 25 d0 33 c4 98 78 be 7f 54 6b f7 d7 f7 05 ..%.3..x..Tk.... 
00:22:29.438 00000030 6d 0d 4d 00 5a fa 1b f8 2e 9a e1 fb fa 76 43 1e m.M.Z........vC. 00:22:29.438 00000040 27 61 46 95 2f 46 af 7e 73 8a cb 95 38 c6 d9 d7 'aF./F.~s...8... 00:22:29.438 00000050 49 a8 c2 d4 94 82 7f 35 6a d1 0f 16 9f 83 c6 27 I......5j......' 00:22:29.438 00000060 8b 85 48 bf 26 ca ee 81 7a c1 fe 82 fc f4 4d 9e ..H.&...z.....M. 00:22:29.438 00000070 bd a5 4b 36 ff 04 93 78 83 f7 1e 83 74 d0 8c b0 ..K6...x....t... 00:22:29.438 00000080 94 cb 82 1e bb d7 46 aa b2 42 d0 b4 7c c8 7b 07 ......F..B..|.{. 00:22:29.438 00000090 47 d5 cd 82 62 5b 6b 7b 53 d4 b7 bb 8f ab 26 fe G...b[k{S.....&. 00:22:29.438 000000a0 84 c2 4c b4 f9 54 1c 09 26 ab 0d 86 ed 18 02 f4 ..L..T..&....... 00:22:29.439 000000b0 33 66 d8 5e a4 b1 77 d5 ee b7 47 8b 8e 8c 10 a3 3f.^..w...G..... 00:22:29.439 000000c0 f7 91 ad 56 87 dc d0 59 73 8c c4 60 17 6a 52 3a ...V...Ys..`.jR: 00:22:29.439 000000d0 63 b9 b6 de 2b 37 3f 3e a6 9a 39 f9 ac bf 5e 85 c...+7?>..9...^. 00:22:29.439 000000e0 bf 0c be be 53 89 42 82 6e 93 d5 93 b8 ac 48 22 ....S.B.n.....H" 00:22:29.439 000000f0 71 9c b2 7b a6 44 4c 72 0a 84 2a 1c 42 c7 9c c7 q..{.DLr..*.B... 00:22:29.439 dh secret: 00:22:29.439 00000000 22 cf 32 c7 dc 68 69 ff 14 68 9e f5 62 15 40 c6 ".2..hi..h..b.@. 00:22:29.439 00000010 05 ca d8 14 a2 65 10 05 61 e6 75 ba 27 63 1c 2f .....e..a.u.'c./ 00:22:29.439 00000020 4a 20 bf 40 69 6a 07 83 e9 22 e6 03 7b 68 59 27 J .@ij..."..{hY' 00:22:29.439 00000030 43 ee f5 e6 77 57 76 aa bf 11 97 81 65 d3 d5 1e C...wWv.....e... 00:22:29.439 00000040 11 25 b4 31 3e 20 46 25 99 57 6e 94 48 2b 1a f9 .%.1> F%.Wn.H+.. 00:22:29.439 00000050 fe 1c f2 51 52 62 02 11 c1 3c 08 6d 8a 94 8f cb ...QRb...<.m.... 00:22:29.439 00000060 47 69 98 fb 0d 18 a0 0e cd ed fd 27 c3 8b e2 f7 Gi.........'.... 00:22:29.439 00000070 e9 59 e5 be bc 09 e9 94 9d ee 28 e6 89 eb 9c cd .Y........(..... 00:22:29.439 00000080 a8 4a 80 35 d5 65 9f 91 3b 25 52 e2 80 14 6c 09 .J.5.e..;%R...l. 00:22:29.439 00000090 78 6f b2 ba 00 7a bc 18 3b 8e b1 35 43 84 65 37 xo...z..;..5C.e7 00:22:29.439 000000a0 5e 26 11 0a cb dc 7e 96 7f 18 af 36 6f f4 fe 75 ^&....~....6o..u 00:22:29.439 000000b0 7f 68 b4 53 f2 15 40 26 36 2c 40 cf 49 7f 0d 1b .h.S..@&6,@.I... 00:22:29.439 000000c0 8f 32 02 17 06 fd 0d ec 38 41 9c b3 76 6f 82 ac .2......8A..vo.. 00:22:29.439 000000d0 76 a9 0b f3 2f 37 4b ea d5 a9 e7 51 e7 24 95 fb v.../7K....Q.$.. 00:22:29.439 000000e0 83 c7 85 3d 3b 7a 54 8f 55 62 e2 0e 4d ea 8c e5 ...=;zT.Ub..M... 00:22:29.439 000000f0 81 29 14 25 ce 88 ff 3a 9f 7b ab 86 3d 0d dd 88 .).%...:.{..=... 
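Up to this point every pubkey and secret dump ends at offset 000000f0, i.e. 256 bytes, which matches the 2048-bit ffdhe2048 group; in the next transaction below the negotiation moves to dhgroup 2 (ffdhe3072) and the dumps grow to 384 bytes, ending at offset 00000170. The relation is simply the group size in bits divided by eight, illustrated by this small check:

/* Illustrative check that the dump sizes above track the DH group size:
 * ffdhe2048 -> 2048 / 8 = 256 bytes (last hexdump row at 0xf0),
 * ffdhe3072 -> 3072 / 8 = 384 bytes (last hexdump row at 0x170). */
#include <stdio.h>

static unsigned int dh_value_bytes(unsigned int group_bits)
{
    return group_bits / 8;
}

int main(void)
{
    printf("ffdhe2048: %u bytes, last row offset 0x%03x\n",
           dh_value_bytes(2048), dh_value_bytes(2048) - 16);
    printf("ffdhe3072: %u bytes, last row offset 0x%03x\n",
           dh_value_bytes(3072), dh_value_bytes(3072) - 16);
    return 0;
}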
00:22:29.439 [2024-09-27 13:27:01.136656] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key4, hash=1, dhgroup=1, seq=3775755182, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.439 [2024-09-27 13:27:01.137106] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.439 [2024-09-27 13:27:01.143171] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.439 [2024-09-27 13:27:01.143667] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.439 [2024-09-27 13:27:01.143859] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.439 [2024-09-27 13:27:01.642194] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.439 [2024-09-27 13:27:01.642360] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.439 [2024-09-27 13:27:01.642637] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 2 (ffdhe3072) 00:22:29.439 [2024-09-27 13:27:01.642809] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.439 [2024-09-27 13:27:01.643021] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.439 ctrlr pubkey: 00:22:29.439 00000000 e7 a0 4e 2b a7 ff 47 46 f2 13 43 37 b7 cd 11 57 ..N+..GF..C7...W 00:22:29.439 00000010 06 3f 83 12 81 91 94 96 0b d0 64 c3 aa a8 3f 90 .?........d...?. 00:22:29.439 00000020 b9 fb 55 13 dd 56 87 5a d6 58 bd b2 6d 4e 7f a7 ..U..V.Z.X..mN.. 00:22:29.439 00000030 db 94 9d 79 88 7b 82 43 ad db e8 8b dd 5f 20 a1 ...y.{.C....._ . 00:22:29.439 00000040 4c 33 d1 0e d4 e4 b6 06 b6 f0 9b b9 df 4f dc 06 L3...........O.. 00:22:29.439 00000050 63 a5 e3 2d 06 61 14 f7 c2 cb e8 5d 4a 29 40 e9 c..-.a.....]J)@. 00:22:29.439 00000060 b7 d6 58 24 39 a1 81 d1 f4 57 0a c0 8c a9 8a 38 ..X$9....W.....8 00:22:29.439 00000070 19 1b 6c 8a f4 c7 0c 3d 5d f6 a6 5f a4 32 99 88 ..l....=].._.2.. 00:22:29.439 00000080 e3 29 22 87 6d 63 65 8a 65 ba 7d 40 9e 1e c4 49 .)".mce.e.}@...I 00:22:29.439 00000090 3a 5b b5 cd a6 60 80 0a 44 b1 f7 16 3e d3 af 1d :[...`..D...>... 00:22:29.439 000000a0 2d c3 5a 3a 98 97 fc bc de 16 c5 68 72 1a 27 e2 -.Z:.......hr.'. 00:22:29.439 000000b0 c7 7d 0d d5 d5 85 a4 8f 86 4b 6b 7f 59 60 9d a0 .}.......Kk.Y`.. 00:22:29.439 000000c0 82 5e 3f b5 14 e1 ff b1 49 17 3e 69 64 e3 b8 01 .^?.....I.>id... 00:22:29.439 000000d0 51 0b 3d e7 ee 06 fb 5e d9 83 0d 92 b7 fc 05 b1 Q.=....^........ 00:22:29.439 000000e0 f4 ab 31 c6 fd db 80 bb 6a 9d 67 a3 17 1e 1d 9d ..1.....j.g..... 00:22:29.439 000000f0 a4 8a 41 bd 7d 6f 33 42 a1 3b 09 ed e8 17 cd 64 ..A.}o3B.;.....d 00:22:29.439 00000100 fd b6 67 12 02 db e4 d7 b1 b1 01 bd 3f 58 d3 ee ..g.........?X.. 00:22:29.439 00000110 ac 0f d2 83 eb b2 89 cd 8e 8b 0b 2f d8 0a 5f cc .........../.._. 00:22:29.439 00000120 31 40 39 63 f2 97 24 d5 5f fa 9c ea c7 b9 ec af 1@9c..$._....... 
00:22:29.439 00000130 a0 85 15 76 ea 18 f1 0a 04 78 23 f7 b4 59 7f 16 ...v.....x#..Y.. 00:22:29.439 00000140 f9 b0 ad b6 2e c7 c4 89 74 03 17 e1 58 07 82 8f ........t...X... 00:22:29.439 00000150 b2 40 d6 a6 1e f1 29 e8 8b 5a 8d 33 80 e0 91 23 .@....)..Z.3...# 00:22:29.439 00000160 d9 f3 27 66 98 75 0d 5e 59 0e e4 8d 5c ed 20 ec ..'f.u.^Y...\. . 00:22:29.439 00000170 db 7c 82 c5 c7 98 65 3a 0b 20 09 f2 d9 30 90 b6 .|....e:. ...0.. 00:22:29.439 host pubkey: 00:22:29.439 00000000 20 8d a3 d3 36 50 73 1a f2 46 bc 4b 82 3b fc df ...6Ps..F.K.;.. 00:22:29.439 00000010 53 f8 a6 1e 71 f3 d6 3d 9e a8 d9 8b 49 83 e6 5b S...q..=....I..[ 00:22:29.439 00000020 c0 df 13 cc 59 5f 60 c5 ef fe 53 34 4d 17 eb 23 ....Y_`...S4M..# 00:22:29.439 00000030 5d 4b f7 79 11 66 ce 22 90 bc 8d d1 53 c8 bc d6 ]K.y.f."....S... 00:22:29.439 00000040 d6 e0 83 8a 2b ef 0a 77 c3 c2 bf fe 84 22 99 17 ....+..w.....".. 00:22:29.439 00000050 47 97 bd a9 22 77 0a dc 45 8b 29 d2 ac f4 19 ed G..."w..E.)..... 00:22:29.439 00000060 91 28 a1 8f f0 5a 96 29 c9 30 63 57 22 d8 44 72 .(...Z.).0cW".Dr 00:22:29.439 00000070 12 8d 30 17 80 5b 42 f2 0b 3b dc 76 5e 44 60 11 ..0..[B..;.v^D`. 00:22:29.439 00000080 d0 2b 33 f7 48 14 80 2b ae ac c3 c4 a3 f5 23 19 .+3.H..+......#. 00:22:29.439 00000090 48 83 7c cc d4 fb 20 b2 7e 87 38 9c 95 41 fc fa H.|... .~.8..A.. 00:22:29.439 000000a0 79 5b 33 df e6 d4 43 8e 1c 4b 61 26 0c 39 e6 75 y[3...C..Ka&.9.u 00:22:29.439 000000b0 b1 5a fb 22 54 2b ff 98 31 34 c6 86 7a 53 a2 d7 .Z."T+..14..zS.. 00:22:29.439 000000c0 a3 df 00 92 53 77 2f 04 bc a6 9e de f3 57 85 68 ....Sw/......W.h 00:22:29.439 000000d0 40 08 05 2a a2 de ec b1 77 d7 0f d6 a0 ab 59 01 @..*....w.....Y. 00:22:29.439 000000e0 12 a0 52 42 51 ed ab ba 7d 1c bd 65 62 ec d2 52 ..RBQ...}..eb..R 00:22:29.439 000000f0 62 91 9f 41 d5 7c 64 87 c5 6c 3c 9a 2b 3f 7d 53 b..A.|d..l<.+?}S 00:22:29.439 00000100 da 30 c8 a2 ba 95 cd a9 55 8d 1d 08 16 73 4d 90 .0......U....sM. 00:22:29.439 00000110 23 c6 49 f0 29 4a 7f f6 af 22 00 41 ac ce 8e 3f #.I.)J...".A...? 00:22:29.439 00000120 5c 7e f0 6d 25 b7 f3 54 55 b4 e8 a2 41 66 35 a4 \~.m%..TU...Af5. 00:22:29.439 00000130 7f d0 42 e2 d1 f8 80 6d 72 45 13 8a 6b 8a 51 dc ..B....mrE..k.Q. 00:22:29.439 00000140 3d e9 1b cf da 7a 2e 81 fe c0 6b a4 d1 6e 92 09 =....z....k..n.. 00:22:29.439 00000150 54 66 fa bd 8e 0b 6d 33 ca d2 da 1f 89 d0 6b d2 Tf....m3......k. 00:22:29.439 00000160 a1 8f b5 64 ef ca 34 e1 03 cd 9d 97 b5 d7 e6 3e ...d..4........> 00:22:29.439 00000170 ff ed 3f 2e d2 54 5c ab 3a c4 10 f4 d3 b4 45 cf ..?..T\.:.....E. 00:22:29.439 dh secret: 00:22:29.439 00000000 bb 30 1f fe dd 9a e4 27 b6 5e d7 33 b3 cb ff ba .0.....'.^.3.... 00:22:29.439 00000010 ad 35 c7 34 25 75 2c fd ef 71 76 b2 c9 1b 49 d4 .5.4%u,..qv...I. 00:22:29.439 00000020 23 3d ff 4b e6 55 60 5a ec a1 1d ed 4d 60 d2 5f #=.K.U`Z....M`._ 00:22:29.439 00000030 9a 96 8e e4 cc 67 75 80 1f e5 96 00 cd 3d 90 d1 .....gu......=.. 00:22:29.439 00000040 18 66 2c 6d eb 4e 44 03 9b 1b 00 fb 0b dd 68 aa .f,m.ND.......h. 00:22:29.439 00000050 87 e7 7b d9 6e c4 fb 73 9f eb ff e3 c7 0b c8 31 ..{.n..s.......1 00:22:29.439 00000060 39 3e 29 43 46 44 a8 a9 c7 d0 1d 6b 52 63 82 92 9>)CFD.....kRc.. 00:22:29.439 00000070 71 17 59 6c a9 d4 d7 40 e6 07 00 4e d6 a9 ed 3f q.Yl...@...N...? 00:22:29.439 00000080 c1 89 d7 79 b7 af fb 13 78 2f 5f 62 25 e7 43 3f ...y....x/_b%.C? 00:22:29.439 00000090 da 6d df 8f c1 c8 c9 62 47 1b eb a7 3f d7 50 a3 .m.....bG...?.P. 00:22:29.439 000000a0 42 84 50 83 20 3e 17 c3 93 27 41 49 91 67 96 56 B.P. 
>...'AI.g.V 00:22:29.439 000000b0 79 b6 4d f9 be d9 89 33 6f d5 72 d1 18 af 55 0d y.M....3o.r...U. 00:22:29.439 000000c0 29 e1 d7 52 06 fe 25 04 da a4 f1 1d 36 75 a1 e4 )..R..%.....6u.. 00:22:29.439 000000d0 ee c7 08 9c 7d c5 e7 bc 4a ed c1 4c 3f d0 33 d5 ....}...J..L?.3. 00:22:29.439 000000e0 64 87 d5 a1 e5 80 96 34 dd c4 85 85 67 84 74 94 d......4....g.t. 00:22:29.439 000000f0 6b b7 39 97 ff ea 70 ac 6b 0f a7 a2 d9 08 aa a0 k.9...p.k....... 00:22:29.439 00000100 4a 0f 13 59 f8 3c a1 ff d3 22 87 1a 08 1c a8 d2 J..Y.<..."...... 00:22:29.439 00000110 1a 67 58 dd ea 50 52 ff 8d f6 bc 1e 69 a2 c2 75 .gX..PR.....i..u 00:22:29.439 00000120 2d ce 82 f7 9a 54 67 1d af 96 10 4d 75 cf 2a 93 -....Tg....Mu.*. 00:22:29.439 00000130 22 e8 f4 80 2d 06 8e f0 7f bd 83 c7 86 14 48 61 "...-.........Ha 00:22:29.439 00000140 8c 11 cc 04 66 5e f1 2b a3 d5 7d 48 79 58 73 48 ....f^.+..}HyXsH 00:22:29.439 00000150 8f 53 15 f1 5b d1 c2 e1 dd a2 1f 1e e8 1f e1 9e .S..[........... 00:22:29.439 00000160 ab f1 de 1a aa c8 7f a2 9a 59 74 d4 fe 2f 8a b0 .........Yt../.. 00:22:29.439 00000170 42 1b 27 4b 85 31 05 b8 3c 8a 6f 4f e4 d6 c5 b4 B.'K.1..<.oO.... 00:22:29.439 [2024-09-27 13:27:01.656759] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key0, hash=1, dhgroup=2, seq=3775755183, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.439 [2024-09-27 13:27:01.657132] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.439 [2024-09-27 13:27:01.665944] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.439 [2024-09-27 13:27:01.666463] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.439 [2024-09-27 13:27:01.666631] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.439 [2024-09-27 13:27:01.666926] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.439 [2024-09-27 13:27:01.719030] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.439 [2024-09-27 13:27:01.719399] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 1 (sha256) 00:22:29.439 [2024-09-27 13:27:01.719612] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 2 (ffdhe3072) 00:22:29.439 [2024-09-27 13:27:01.719724] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.439 [2024-09-27 13:27:01.720125] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.439 ctrlr pubkey: 00:22:29.439 00000000 e7 a0 4e 2b a7 ff 47 46 f2 13 43 37 b7 cd 11 57 ..N+..GF..C7...W 00:22:29.440 00000010 06 3f 83 12 81 91 94 96 0b d0 64 c3 aa a8 3f 90 .?........d...?. 00:22:29.440 00000020 b9 fb 55 13 dd 56 87 5a d6 58 bd b2 6d 4e 7f a7 ..U..V.Z.X..mN.. 00:22:29.440 00000030 db 94 9d 79 88 7b 82 43 ad db e8 8b dd 5f 20 a1 ...y.{.C....._ . 
00:22:29.440 00000040 4c 33 d1 0e d4 e4 b6 06 b6 f0 9b b9 df 4f dc 06 L3...........O.. 00:22:29.440 00000050 63 a5 e3 2d 06 61 14 f7 c2 cb e8 5d 4a 29 40 e9 c..-.a.....]J)@. 00:22:29.440 00000060 b7 d6 58 24 39 a1 81 d1 f4 57 0a c0 8c a9 8a 38 ..X$9....W.....8 00:22:29.440 00000070 19 1b 6c 8a f4 c7 0c 3d 5d f6 a6 5f a4 32 99 88 ..l....=].._.2.. 00:22:29.440 00000080 e3 29 22 87 6d 63 65 8a 65 ba 7d 40 9e 1e c4 49 .)".mce.e.}@...I 00:22:29.440 00000090 3a 5b b5 cd a6 60 80 0a 44 b1 f7 16 3e d3 af 1d :[...`..D...>... 00:22:29.440 000000a0 2d c3 5a 3a 98 97 fc bc de 16 c5 68 72 1a 27 e2 -.Z:.......hr.'. 00:22:29.440 000000b0 c7 7d 0d d5 d5 85 a4 8f 86 4b 6b 7f 59 60 9d a0 .}.......Kk.Y`.. 00:22:29.440 000000c0 82 5e 3f b5 14 e1 ff b1 49 17 3e 69 64 e3 b8 01 .^?.....I.>id... 00:22:29.440 000000d0 51 0b 3d e7 ee 06 fb 5e d9 83 0d 92 b7 fc 05 b1 Q.=....^........ 00:22:29.440 000000e0 f4 ab 31 c6 fd db 80 bb 6a 9d 67 a3 17 1e 1d 9d ..1.....j.g..... 00:22:29.440 000000f0 a4 8a 41 bd 7d 6f 33 42 a1 3b 09 ed e8 17 cd 64 ..A.}o3B.;.....d 00:22:29.440 00000100 fd b6 67 12 02 db e4 d7 b1 b1 01 bd 3f 58 d3 ee ..g.........?X.. 00:22:29.440 00000110 ac 0f d2 83 eb b2 89 cd 8e 8b 0b 2f d8 0a 5f cc .........../.._. 00:22:29.440 00000120 31 40 39 63 f2 97 24 d5 5f fa 9c ea c7 b9 ec af 1@9c..$._....... 00:22:29.440 00000130 a0 85 15 76 ea 18 f1 0a 04 78 23 f7 b4 59 7f 16 ...v.....x#..Y.. 00:22:29.440 00000140 f9 b0 ad b6 2e c7 c4 89 74 03 17 e1 58 07 82 8f ........t...X... 00:22:29.440 00000150 b2 40 d6 a6 1e f1 29 e8 8b 5a 8d 33 80 e0 91 23 .@....)..Z.3...# 00:22:29.440 00000160 d9 f3 27 66 98 75 0d 5e 59 0e e4 8d 5c ed 20 ec ..'f.u.^Y...\. . 00:22:29.440 00000170 db 7c 82 c5 c7 98 65 3a 0b 20 09 f2 d9 30 90 b6 .|....e:. ...0.. 00:22:29.440 host pubkey: 00:22:29.440 00000000 74 00 a7 fe 30 bc 93 d4 45 d8 34 53 e0 31 0f b7 t...0...E.4S.1.. 00:22:29.440 00000010 28 a3 45 fd ef 6f a1 fe d2 8c 1f 1e 5f d3 3b 8c (.E..o......_.;. 00:22:29.440 00000020 dc 2f f9 ac 96 48 8e ce 07 bb 2a 08 1a a3 7b 72 ./...H....*...{r 00:22:29.440 00000030 13 8f d4 8a 2b c7 55 a4 98 d0 3b 63 e7 17 1a df ....+.U...;c.... 00:22:29.440 00000040 d9 e4 c6 b4 a6 1b d7 66 18 a2 e7 ed 7b 53 d0 22 .......f....{S." 00:22:29.440 00000050 65 b8 4f 4e 2a b2 96 b6 c1 0a a2 1b eb 7e 3b 2f e.ON*........~;/ 00:22:29.440 00000060 51 44 1b ea 04 3f fd f1 22 ae c9 89 3b 25 4b 8e QD...?.."...;%K. 00:22:29.440 00000070 8f 8d 33 03 44 21 14 c5 89 5b cb 6d b6 02 df 55 ..3.D!...[.m...U 00:22:29.440 00000080 b4 7f d9 8f 80 09 a8 09 d2 93 e6 fb fa 4c c5 e3 .............L.. 00:22:29.440 00000090 6d 63 70 40 9a 15 1f 03 c5 13 f3 c1 ee 4e bd f0 mcp@.........N.. 00:22:29.440 000000a0 b4 25 34 18 09 3a 25 98 cf cb 77 0d b0 7e 7c a0 .%4..:%...w..~|. 00:22:29.440 000000b0 01 34 55 04 f3 21 a3 50 67 f7 fa e6 00 dc d1 05 .4U..!.Pg....... 00:22:29.440 000000c0 6b 23 9f 0c 2c 07 4f 3b 1b 6c d0 04 61 6f bc 2e k#..,.O;.l..ao.. 00:22:29.440 000000d0 59 71 5d f5 17 6f 75 0a 4d e9 be 34 dc 8c 58 d9 Yq]..ou.M..4..X. 00:22:29.440 000000e0 6f 38 65 c7 e3 66 b4 8a cc 71 0e a8 be 7b 44 d7 o8e..f...q...{D. 00:22:29.440 000000f0 f7 ed ae 51 8d 82 1e 53 ae 73 16 d8 4b 4c 3e d4 ...Q...S.s..KL>. 00:22:29.440 00000100 ed 2b 2c aa 15 8b 52 61 35 56 c8 5c 3e fd fc ee .+,...Ra5V.\>... 
00:22:29.440 00000110 4e 70 6b bb 51 2e a1 99 af f3 38 c8 bb bf e0 78 Npk.Q.....8....x 00:22:29.440 00000120 03 43 7e 00 84 4b 74 e8 c0 f5 61 11 c5 7a 79 5c .C~..Kt...a..zy\ 00:22:29.440 00000130 bb 78 4d 70 ba f6 4f a0 2d f8 2e 85 36 68 27 69 .xMp..O.-...6h'i 00:22:29.440 00000140 ee ff 30 61 28 01 70 90 c3 d5 fd 43 02 c2 71 58 ..0a(.p....C..qX 00:22:29.440 00000150 5c cf f6 d6 8c b0 de 38 0e e0 0e 16 cc dd 94 05 \......8........ 00:22:29.440 00000160 80 7f cf 5e 92 df 72 9d 4f 31 a0 72 ee 6b 00 36 ...^..r.O1.r.k.6 00:22:29.440 00000170 23 91 90 ff 45 60 3f d5 cf dc 6f 59 e2 30 98 9c #...E`?...oY.0.. 00:22:29.440 dh secret: 00:22:29.440 00000000 52 da 94 d8 eb eb 49 35 d3 11 3a 59 8d d1 da de R.....I5..:Y.... 00:22:29.440 00000010 a5 fe dc f0 75 04 21 1a 32 05 60 ae ff ae e7 ed ....u.!.2.`..... 00:22:29.440 00000020 1e 44 4e 1a 46 81 5e 94 7a 50 1b bb f2 c7 fa 9f .DN.F.^.zP...... 00:22:29.440 00000030 94 e5 7d f7 f2 15 85 a8 48 a5 e0 f4 9e cb 44 d2 ..}.....H.....D. 00:22:29.440 00000040 28 4c 4c 7e 27 f9 4d 87 dd 20 77 c7 01 1e 31 f3 (LL~'.M.. w...1. 00:22:29.440 00000050 96 2f b9 c8 11 b2 d1 86 5f a5 6a 13 e4 cd 40 d0 ./......_.j...@. 00:22:29.440 00000060 3d 4b 92 99 96 c0 66 22 58 46 94 fe 9f d3 80 f2 =K....f"XF...... 00:22:29.440 00000070 5f 75 8d 7a ab 35 21 47 9b 87 e2 af 1e 57 b7 a7 _u.z.5!G.....W.. 00:22:29.440 00000080 12 8f c9 d5 ae c7 45 71 e6 ad 9a 14 8f ca e3 c7 ......Eq........ 00:22:29.440 00000090 9c bf de b8 e5 85 0d 09 ff 5a 27 8f a2 94 61 3c .........Z'...a< 00:22:29.440 000000a0 c3 19 5e b9 5d 1d 6c 38 4b 50 6c f3 86 2c f7 92 ..^.].l8KPl..,.. 00:22:29.440 000000b0 5b 72 1a 22 76 b3 fc bb 79 f3 b4 92 b3 af b5 5b [r."v...y......[ 00:22:29.440 000000c0 05 30 ed 0f 96 73 5b 77 69 ab 39 f2 3e 2b fb e0 .0...s[wi.9.>+.. 00:22:29.440 000000d0 96 2c 6e 81 3f b2 87 0b 4b 61 19 d2 34 28 2d b6 .,n.?...Ka..4(-. 00:22:29.440 000000e0 7d 20 b3 41 06 84 9f f0 d5 a0 50 dd 71 08 ba eb } .A......P.q... 00:22:29.440 000000f0 78 54 30 86 dc 63 0b 76 62 10 a5 ec 2a 94 02 c8 xT0..c.vb...*... 00:22:29.440 00000100 01 f8 36 6e 20 f5 07 51 f0 4b 4a 24 8c c5 97 bc ..6n ..Q.KJ$.... 00:22:29.440 00000110 d1 04 b5 5f 21 da e2 18 70 6e 51 b6 4d ec 6b 86 ..._!...pnQ.M.k. 00:22:29.440 00000120 8a fb 81 02 e4 79 09 65 0b 49 d6 4d 4d ec f9 6d .....y.e.I.MM..m 00:22:29.440 00000130 5b bf 10 48 8a 17 4c 9d fb d2 29 36 09 81 3b 03 [..H..L...)6..;. 00:22:29.440 00000140 9a fe 57 b4 c8 1e 3f 8d 5b ae 1b d3 77 27 20 2a ..W...?.[...w' * 00:22:29.440 00000150 73 82 04 92 51 fb af 08 b3 94 62 37 28 6a 2a 60 s...Q.....b7(j*` 00:22:29.440 00000160 16 0a e1 be 17 d4 15 dd e9 ca 1c 58 3e ef 2b 4e ...........X>.+N 00:22:29.440 00000170 98 d8 3b a9 76 df 1c ce 0b 94 40 41 ae b9 e9 fc ..;.v.....@A.... 
00:22:29.440 [2024-09-27 13:27:01.733644] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key0, hash=1, dhgroup=2, seq=3775755184, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.440 [2024-09-27 13:27:01.734024] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.440 [2024-09-27 13:27:01.741910] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.440 [2024-09-27 13:27:01.742417] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.440 [2024-09-27 13:27:01.742540] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.440 [2024-09-27 13:27:01.742857] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.440 [2024-09-27 13:27:01.839993] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.440 [2024-09-27 13:27:01.840236] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.440 [2024-09-27 13:27:01.840583] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 2 (ffdhe3072) 00:22:29.440 [2024-09-27 13:27:01.840791] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.440 [2024-09-27 13:27:01.841023] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.440 ctrlr pubkey: 00:22:29.440 00000000 31 fa e9 65 fb 86 a3 07 46 a7 c6 c2 86 f5 03 2d 1..e....F......- 00:22:29.440 00000010 d8 30 19 00 8b da fa 31 be ce 6e 6d 3b 1f 33 bc .0.....1..nm;.3. 00:22:29.440 00000020 dd e5 ed ec 0a 30 d5 16 ad 44 c2 ac c4 ff a6 bf .....0...D...... 00:22:29.440 00000030 53 6f 74 31 7e d9 c8 42 a3 62 cd 47 f8 b0 5a 51 Sot1~..B.b.G..ZQ 00:22:29.440 00000040 cb 3a 52 4e 1d cc 05 12 08 e6 f3 e6 0c 08 1c 92 .:RN............ 00:22:29.440 00000050 90 ed 54 70 96 bf a9 64 64 83 39 c4 13 40 58 1e ..Tp...dd.9..@X. 00:22:29.440 00000060 02 8f e6 3f 4d c0 ac e2 aa 45 2f ba 0d 53 cf 4b ...?M....E/..S.K 00:22:29.440 00000070 10 19 4d d5 12 a0 86 e6 00 30 11 76 b6 60 17 57 ..M......0.v.`.W 00:22:29.440 00000080 97 10 db 9c ef 9b f5 c2 d6 7d 58 f8 85 9d 98 a3 .........}X..... 00:22:29.440 00000090 6c 60 32 82 1a 2d b1 2f fb af 26 7c e6 f4 9c 22 l`2..-./..&|..." 00:22:29.440 000000a0 45 3d 20 d8 8c 50 de 87 dd ba e2 6a b7 ed fd ad E= ..P.....j.... 00:22:29.440 000000b0 7b af 10 e3 66 b5 a3 b2 d3 64 1f be 24 9c 38 af {...f....d..$.8. 00:22:29.440 000000c0 33 55 b9 a2 37 f2 ca bd a6 f7 73 05 1f db 34 ea 3U..7.....s...4. 00:22:29.440 000000d0 c7 4d 16 ed 58 5d f8 12 68 8b b1 c7 e5 31 26 87 .M..X]..h....1&. 00:22:29.440 000000e0 4a f9 9e ed 59 cc 54 a6 1e 91 67 a3 e7 46 81 ea J...Y.T...g..F.. 00:22:29.440 000000f0 23 47 a8 40 3c 8b 3d b4 e6 f4 0d 03 02 76 47 be #G.@<.=......vG. 00:22:29.440 00000100 e6 99 98 8e d4 99 0b 5e d2 e2 43 d3 6c aa 99 82 .......^..C.l... 
00:22:29.440 00000110 90 57 55 c6 cd 4d 1c 5c 87 45 4c ca d0 c5 41 ca .WU..M.\.EL...A. 00:22:29.440 00000120 40 1b 4b 2b e5 ac c6 c4 5d c9 7b 3f b7 f4 71 6d @.K+....].{?..qm 00:22:29.440 00000130 5a 56 8e 6f 71 da a3 01 43 39 df 53 e4 6d 1c d7 ZV.oq...C9.S.m.. 00:22:29.440 00000140 c8 47 e1 e8 1f e2 a1 a2 d0 76 68 4a a9 62 8e 90 .G.......vhJ.b.. 00:22:29.440 00000150 29 a1 f0 18 71 78 70 d9 29 20 50 45 c6 c5 5e 87 )...qxp.) PE..^. 00:22:29.440 00000160 ad e2 4e 5b 61 74 e1 df 5f 0d b5 a8 76 c2 74 83 ..N[at.._...v.t. 00:22:29.440 00000170 b2 67 14 72 03 21 5e 83 05 ea dc e7 fc 5e 3c 2a .g.r.!^......^<* 00:22:29.440 host pubkey: 00:22:29.440 00000000 1f 31 02 de 88 ac 8c 80 a2 44 12 80 d6 bc 6b e6 .1.......D....k. 00:22:29.440 00000010 a8 c3 0c 4a 2c 3f ff 8a b2 be 40 8f 07 0f b9 3c ...J,?....@....< 00:22:29.440 00000020 27 16 0b 0a 56 3a b6 a1 f2 c8 d7 20 44 d3 d4 94 '...V:..... D... 00:22:29.440 00000030 22 7b 60 67 7f d5 b8 a0 56 4b b0 0a 0d 4f 24 8b "{`g....VK...O$. 00:22:29.440 00000040 fc 29 54 2e ea 86 27 78 c3 c8 38 88 a1 81 90 43 .)T...'x..8....C 00:22:29.440 00000050 66 c2 ba c0 46 43 7d da 2f 6c 24 cb 51 7f 79 1d f...FC}./l$.Q.y. 00:22:29.440 00000060 d3 78 2a 2a 6c 6b 89 9d 43 26 bf 99 fb eb e3 91 .x**lk..C&...... 00:22:29.440 00000070 24 c8 1c c3 bb 38 be 17 2b 0e ab 3c bc 76 a4 12 $....8..+..<.v.. 00:22:29.440 00000080 c0 9a 39 0e e5 1a 28 cd 9a fb 7d f0 fa 8e f1 49 ..9...(...}....I 00:22:29.440 00000090 32 cc e0 12 e9 32 87 d8 59 d6 54 62 99 90 36 52 2....2..Y.Tb..6R 00:22:29.440 000000a0 d6 24 9f cf 88 67 fc 87 d1 cd 31 3e 1e 0d 05 a3 .$...g....1>.... 00:22:29.440 000000b0 15 3b 6f 5a 4d c8 96 f6 6f 0d fc fe 9e 29 16 8f .;oZM...o....).. 00:22:29.440 000000c0 b8 8a fa 7d 56 47 c6 43 1e d7 6d 41 3b 53 4f 9b ...}VG.C..mA;SO. 00:22:29.440 000000d0 72 c0 5b c4 c8 6d 56 73 64 2f 3c 37 b7 d9 07 00 r.[..mVsd/<7.... 00:22:29.440 000000e0 08 3a 76 04 f1 39 db db b8 5b 9d 67 3d 85 a7 c9 .:v..9...[.g=... 00:22:29.440 000000f0 87 b0 ee 07 eb 24 09 2e 57 3a 11 8c dc 0a 7a 70 .....$..W:....zp 00:22:29.440 00000100 0d c5 52 0a 39 65 b0 a3 cb 56 90 5b c3 8e 9d d1 ..R.9e...V.[.... 00:22:29.440 00000110 6b b2 76 45 73 5d 62 d6 de 10 9e f2 f2 61 1c 3d k.vEs]b......a.= 00:22:29.440 00000120 ef c6 0a 7d d4 31 f2 c3 85 66 76 9d 94 08 82 fe ...}.1...fv..... 00:22:29.441 00000130 2d 43 6d 0d b6 06 bd 58 77 b2 b4 e1 81 e5 34 7c -Cm....Xw.....4| 00:22:29.441 00000140 23 1b 44 62 c8 76 ad 76 ea b1 8a a9 1a 1f 75 d1 #.Db.v.v......u. 00:22:29.441 00000150 36 30 37 58 ff 01 64 4a 8b 01 c8 c6 21 bb 88 8c 607X..dJ....!... 00:22:29.441 00000160 30 e2 83 3f f0 06 59 6e 34 d3 72 ed 1a e3 76 3c 0..?..Yn4.r...v< 00:22:29.441 00000170 52 6c 17 92 b9 1e 09 04 a1 d4 e3 ac 4b 2f 20 da Rl..........K/ . 00:22:29.441 dh secret: 00:22:29.441 00000000 47 d0 b5 f6 44 27 32 fe b5 57 d2 eb 6c 22 43 69 G...D'2..W..l"Ci 00:22:29.441 00000010 48 9c 99 c6 99 56 86 d5 89 8a 89 c6 bb 03 2b 9e H....V........+. 00:22:29.441 00000020 19 f4 54 81 6a d4 21 79 8a 3e 17 32 99 a4 ac e6 ..T.j.!y.>.2.... 00:22:29.441 00000030 ae 4b 89 28 45 f4 1c 64 77 c9 55 ed 04 15 37 b6 .K.(E..dw.U...7. 00:22:29.441 00000040 11 36 d2 f2 e9 35 33 6b 3a c3 fc c2 bb 87 d7 ca .6...53k:....... 00:22:29.441 00000050 af 8d 21 9f af 80 d3 02 d9 71 97 ee 4f 76 9c 51 ..!......q..Ov.Q 00:22:29.441 00000060 ef 03 f1 17 bc 05 d8 f5 d3 9f 9d 22 0b 98 0f e0 ...........".... 00:22:29.441 00000070 f7 fc 37 c6 6a 5a 39 43 ec fd f0 68 c8 21 64 25 ..7.jZ9C...h.!d% 00:22:29.441 00000080 2c ca 10 90 f5 41 83 92 4e c7 46 18 7d 2a 2d 9a ,....A..N.F.}*-. 
00:22:29.441 00000090 0e cb 0c fd 70 15 63 eb b9 a8 cd 3d d0 a5 db a8 ....p.c....=.... 00:22:29.441 000000a0 52 d3 62 47 f0 f6 cb d9 ea 37 16 53 67 84 56 82 R.bG.....7.Sg.V. 00:22:29.441 000000b0 9a 26 bc 8a 6a 7a fa 73 53 80 70 a3 c6 4c 1c fe .&..jz.sS.p..L.. 00:22:29.441 000000c0 96 bd 5f be 76 b4 81 75 52 f1 31 89 7f d7 17 87 .._.v..uR.1..... 00:22:29.441 000000d0 d4 54 ff 4a 56 cb ae f0 d1 c1 0c 4b 26 d4 9c 1b .T.JV......K&... 00:22:29.441 000000e0 92 87 c8 07 88 fb d7 25 dd 38 c4 3e e3 90 26 65 .......%.8.>..&e 00:22:29.441 000000f0 66 f6 af df c0 88 f0 bf 71 6d dc 2d 9d 75 f2 91 f.......qm.-.u.. 00:22:29.441 00000100 08 9a 9c d1 f4 e0 77 87 11 1c b6 27 13 a9 ed da ......w....'.... 00:22:29.441 00000110 7b 00 3b 8a 1b 5d 23 ed 5d 3c 94 26 80 6e 6c 54 {.;..]#.]<.&.nlT 00:22:29.441 00000120 32 95 0b f2 35 b4 4a 67 54 eb 21 7a 6b 4e 1a 07 2...5.JgT.!zkN.. 00:22:29.441 00000130 4e 89 fc 41 32 af c9 6c cb 14 eb cd 6a 4a f1 48 N..A2..l....jJ.H 00:22:29.441 00000140 d8 c6 c1 65 e5 bc cf a1 85 48 c3 ee f7 23 01 43 ...e.....H...#.C 00:22:29.441 00000150 b3 32 af 23 40 8e bc 1b f0 c6 df 27 77 11 c1 35 .2.#@......'w..5 00:22:29.441 00000160 41 df fd 93 8a dd a2 68 b6 18 fa e7 11 ab 15 6d A......h.......m 00:22:29.441 00000170 5d e0 11 94 bd dc 1f 24 4d 1d 36 54 d4 02 ef c0 ]......$M.6T.... 00:22:29.441 [2024-09-27 13:27:01.857847] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key1, hash=1, dhgroup=2, seq=3775755185, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.441 [2024-09-27 13:27:01.858141] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.441 [2024-09-27 13:27:01.865760] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.441 [2024-09-27 13:27:01.866224] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.441 [2024-09-27 13:27:01.866450] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.441 [2024-09-27 13:27:01.866724] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.441 [2024-09-27 13:27:01.918565] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.441 [2024-09-27 13:27:01.918979] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 1 (sha256) 00:22:29.441 [2024-09-27 13:27:01.919218] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 2 (ffdhe3072) 00:22:29.441 [2024-09-27 13:27:01.919369] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.441 [2024-09-27 13:27:01.919743] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.441 ctrlr pubkey: 00:22:29.441 00000000 31 fa e9 65 fb 86 a3 07 46 a7 c6 c2 86 f5 03 2d 1..e....F......- 00:22:29.441 00000010 d8 30 19 00 8b da fa 31 be ce 6e 6d 3b 1f 33 bc .0.....1..nm;.3. 
00:22:29.441 00000020 dd e5 ed ec 0a 30 d5 16 ad 44 c2 ac c4 ff a6 bf .....0...D...... 00:22:29.441 00000030 53 6f 74 31 7e d9 c8 42 a3 62 cd 47 f8 b0 5a 51 Sot1~..B.b.G..ZQ 00:22:29.441 00000040 cb 3a 52 4e 1d cc 05 12 08 e6 f3 e6 0c 08 1c 92 .:RN............ 00:22:29.441 00000050 90 ed 54 70 96 bf a9 64 64 83 39 c4 13 40 58 1e ..Tp...dd.9..@X. 00:22:29.441 00000060 02 8f e6 3f 4d c0 ac e2 aa 45 2f ba 0d 53 cf 4b ...?M....E/..S.K 00:22:29.441 00000070 10 19 4d d5 12 a0 86 e6 00 30 11 76 b6 60 17 57 ..M......0.v.`.W 00:22:29.441 00000080 97 10 db 9c ef 9b f5 c2 d6 7d 58 f8 85 9d 98 a3 .........}X..... 00:22:29.441 00000090 6c 60 32 82 1a 2d b1 2f fb af 26 7c e6 f4 9c 22 l`2..-./..&|..." 00:22:29.441 000000a0 45 3d 20 d8 8c 50 de 87 dd ba e2 6a b7 ed fd ad E= ..P.....j.... 00:22:29.441 000000b0 7b af 10 e3 66 b5 a3 b2 d3 64 1f be 24 9c 38 af {...f....d..$.8. 00:22:29.441 000000c0 33 55 b9 a2 37 f2 ca bd a6 f7 73 05 1f db 34 ea 3U..7.....s...4. 00:22:29.441 000000d0 c7 4d 16 ed 58 5d f8 12 68 8b b1 c7 e5 31 26 87 .M..X]..h....1&. 00:22:29.441 000000e0 4a f9 9e ed 59 cc 54 a6 1e 91 67 a3 e7 46 81 ea J...Y.T...g..F.. 00:22:29.441 000000f0 23 47 a8 40 3c 8b 3d b4 e6 f4 0d 03 02 76 47 be #G.@<.=......vG. 00:22:29.441 00000100 e6 99 98 8e d4 99 0b 5e d2 e2 43 d3 6c aa 99 82 .......^..C.l... 00:22:29.441 00000110 90 57 55 c6 cd 4d 1c 5c 87 45 4c ca d0 c5 41 ca .WU..M.\.EL...A. 00:22:29.441 00000120 40 1b 4b 2b e5 ac c6 c4 5d c9 7b 3f b7 f4 71 6d @.K+....].{?..qm 00:22:29.441 00000130 5a 56 8e 6f 71 da a3 01 43 39 df 53 e4 6d 1c d7 ZV.oq...C9.S.m.. 00:22:29.441 00000140 c8 47 e1 e8 1f e2 a1 a2 d0 76 68 4a a9 62 8e 90 .G.......vhJ.b.. 00:22:29.441 00000150 29 a1 f0 18 71 78 70 d9 29 20 50 45 c6 c5 5e 87 )...qxp.) PE..^. 00:22:29.441 00000160 ad e2 4e 5b 61 74 e1 df 5f 0d b5 a8 76 c2 74 83 ..N[at.._...v.t. 00:22:29.441 00000170 b2 67 14 72 03 21 5e 83 05 ea dc e7 fc 5e 3c 2a .g.r.!^......^<* 00:22:29.441 host pubkey: 00:22:29.441 00000000 cc b1 24 ff 7e cc 39 15 0d 11 29 5d ce d3 0c 23 ..$.~.9...)]...# 00:22:29.441 00000010 6b 9f dd 31 72 0a 4a d5 bd 3b cb 76 25 ba 8c b2 k..1r.J..;.v%... 00:22:29.441 00000020 61 fa cc 26 c3 51 f3 c1 99 61 35 7b f1 9a 96 f5 a..&.Q...a5{.... 00:22:29.441 00000030 c9 0c a9 05 46 0e 1f e1 b9 30 5f 4c 83 fa 50 8b ....F....0_L..P. 00:22:29.441 00000040 64 c4 5c 88 de 9c ed 12 87 fe 3c 62 fa f0 c7 1c d.\.......X.]>.2i 00:22:29.441 00000160 cf 19 a4 a1 0a cc 5c b9 f5 65 94 9a 9d a9 a2 ef ......\..e...... 00:22:29.441 00000170 07 10 e4 7c 16 65 92 c5 bc 85 01 12 1a d0 20 48 ...|.e........ H 00:22:29.441 dh secret: 00:22:29.441 00000000 73 ce 95 5e c3 65 5b eb 02 c7 a4 8e 03 20 86 fb s..^.e[...... .. 00:22:29.441 00000010 e8 3a a0 a6 9b 08 01 45 d5 ec f3 62 c0 b1 bc a9 .:.....E...b.... 00:22:29.441 00000020 92 50 77 f5 35 4a 1f 94 45 58 3f c7 12 c8 fa 95 .Pw.5J..EX?..... 00:22:29.441 00000030 9c ab 43 dc 92 1e e2 16 d9 e9 4f f4 bf b5 e7 92 ..C.......O..... 00:22:29.441 00000040 3a 76 cb 5b 96 38 d8 8c c4 48 48 ed db 47 f0 05 :v.[.8...HH..G.. 00:22:29.441 00000050 16 a9 56 7e 26 c4 8f f1 c4 a6 de d4 e9 9c 2e 0f ..V~&........... 00:22:29.441 00000060 da 9b 5b 3f a6 5a 02 30 84 5b 18 24 1d d7 75 59 ..[?.Z.0.[.$..uY 00:22:29.441 00000070 38 b7 47 29 91 c5 e9 7d 93 bb 31 52 72 55 72 31 8.G)...}..1RrUr1 00:22:29.441 00000080 29 a7 36 d9 de 50 33 96 1a 42 5e 9c 12 03 49 03 ).6..P3..B^...I. 00:22:29.441 00000090 4b 67 5a 0e 0f bb 37 f0 30 d4 9d fc 6f 60 38 30 KgZ...7.0...o`80 00:22:29.441 000000a0 84 26 bc 14 8f 32 e3 ec 45 ed ea 75 36 5c 21 b2 .&...2..E..u6\!. 
00:22:29.441 000000b0 fa 49 51 b8 b3 65 bd 51 97 69 bc 4c 95 bc d3 c2 .IQ..e.Q.i.L.... 00:22:29.441 000000c0 af fe 8b 8c 71 38 b5 bd 53 bf af e0 a8 67 93 0e ....q8..S....g.. 00:22:29.441 000000d0 4c 53 a3 71 43 a1 89 87 ff 5c 04 94 15 71 0e 2e LS.qC....\...q.. 00:22:29.441 000000e0 97 42 12 10 73 90 e0 42 d9 6f 99 bf f2 79 30 50 .B..s..B.o...y0P 00:22:29.441 000000f0 86 f0 26 f0 d0 f0 0c 65 c9 7e c1 5d ac bf 95 de ..&....e.~.].... 00:22:29.441 00000100 49 3c 8b 44 ad c4 a4 00 4d a1 f1 42 2c 67 4a b4 I<.D....M..B,gJ. 00:22:29.441 00000110 76 56 39 ac 3e e3 7c ee 70 49 54 3c a1 85 6b 9f vV9.>.|.pIT<..k. 00:22:29.441 00000120 48 00 42 54 0d 06 ff 7b 0a 90 87 5c 02 06 c9 b6 H.BT...{...\.... 00:22:29.441 00000130 60 c4 d6 63 a3 5d bd 27 f4 cc d2 6f f8 fc 64 64 `..c.].'...o..dd 00:22:29.441 00000140 a2 f9 8d 2a 62 e3 88 23 7f ad ea 42 09 6e 55 c1 ...*b..#...B.nU. 00:22:29.441 00000150 01 d1 71 80 0b cc 7a 2c ae 09 85 87 ad cd 14 0b ..q...z,........ 00:22:29.441 00000160 12 64 a5 1a 4e 63 69 3e b4 f3 17 66 9c af 72 87 .d..Nci>...f..r. 00:22:29.441 00000170 c6 2d 5a a5 89 d7 82 e6 90 3f 6d 04 e0 f6 32 3a .-Z......?m...2: 00:22:29.441 [2024-09-27 13:27:01.935517] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key1, hash=1, dhgroup=2, seq=3775755186, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.441 [2024-09-27 13:27:01.935841] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.441 [2024-09-27 13:27:01.943687] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.441 [2024-09-27 13:27:01.944213] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.442 [2024-09-27 13:27:01.944427] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.442 [2024-09-27 13:27:01.944765] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.442 [2024-09-27 13:27:02.054094] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.442 [2024-09-27 13:27:02.054458] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.442 [2024-09-27 13:27:02.054642] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 2 (ffdhe3072) 00:22:29.442 [2024-09-27 13:27:02.054775] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.442 [2024-09-27 13:27:02.055029] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.442 ctrlr pubkey: 00:22:29.442 00000000 ec 76 21 2a dd 06 14 90 98 33 6d 5c 0b 11 bf 97 .v!*.....3m\.... 00:22:29.442 00000010 64 8d d2 23 7f 73 e7 9c 24 cc 6c 8b 33 bd 84 26 d..#.s..$.l.3..& 00:22:29.442 00000020 8a a8 c5 45 be d3 84 ae 97 20 e1 5c 8a 27 12 3e ...E..... .\.'.> 00:22:29.442 00000030 0f 80 2f 00 7c c8 87 9f 6d 6e 79 79 8b 03 7e cb ../.|...mnyy..~. 
00:22:29.442 00000040 88 af a8 92 ca 8d 53 b4 9d 09 d1 97 37 fa 60 f7 ......S.....7.`. 00:22:29.442 00000050 d4 0e b3 df 6d 57 79 2d 3a 9c 1a 72 a8 2c 3f bc ....mWy-:..r.,?. 00:22:29.442 00000060 ec 12 b0 b2 b4 21 3d f5 e0 6c 15 11 94 2d 03 87 .....!=..l...-.. 00:22:29.442 00000070 b9 0e 54 ab f4 a1 d4 7d 6a 8e 81 91 db b3 b3 64 ..T....}j......d 00:22:29.442 00000080 eb 56 38 8c eb 2f 6c 8c 0f 7a 01 52 ce 86 85 e4 .V8../l..z.R.... 00:22:29.442 00000090 1f 6f de 3f 5b a8 12 ea f5 30 5e 5a a3 88 ce 84 .o.?[....0^Z.... 00:22:29.442 000000a0 99 e9 09 12 15 e8 9d 9e 5d ae 8a 8d 27 7b 8a 79 ........]...'{.y 00:22:29.442 000000b0 05 5a c3 f3 ca 0e 53 9a ea 3a ff be b2 ae 51 02 .Z....S..:....Q. 00:22:29.442 000000c0 23 97 11 b0 2d e7 2c f9 20 f8 18 ab 2a 1e 99 60 #...-.,. ...*..` 00:22:29.442 000000d0 41 5a ba 06 4b e7 51 c5 6f a9 e4 e6 a8 ca 4d 20 AZ..K.Q.o.....M 00:22:29.442 000000e0 71 7d 8f 11 f0 53 2e 59 a9 87 89 ae 3b 63 41 0d q}...S.Y....;cA. 00:22:29.442 000000f0 0b d6 95 ae 68 f2 42 64 1d 37 8c 81 cc 83 c9 86 ....h.Bd.7...... 00:22:29.442 00000100 34 55 94 2c 1e 61 77 57 78 00 23 bb 67 41 cb a9 4U.,.awWx.#.gA.. 00:22:29.442 00000110 8f 5e 5a 6a d0 9d 53 88 cf bd f2 5c 61 f0 19 62 .^Zj..S....\a..b 00:22:29.442 00000120 86 16 6c 82 3f a1 a2 96 21 a4 a7 d8 87 ec 99 ba ..l.?...!....... 00:22:29.442 00000130 a1 86 93 3b 50 c3 10 be ae 56 30 a1 22 76 fd 11 ...;P....V0."v.. 00:22:29.442 00000140 25 79 4f a7 90 a5 8f 80 2d bf b1 49 a6 63 ef a5 %yO.....-..I.c.. 00:22:29.442 00000150 e0 e2 f6 5f ba 79 91 aa 49 fa 54 38 09 ce 78 cf ..._.y..I.T8..x. 00:22:29.442 00000160 ca 0a 41 fc 50 52 2b 21 55 46 51 db 88 c8 39 c6 ..A.PR+!UFQ...9. 00:22:29.442 00000170 18 0c 40 3a 66 b5 eb c7 86 33 98 28 8c b0 c9 97 ..@:f....3.(.... 00:22:29.442 host pubkey: 00:22:29.442 00000000 7c fb bc 8c 86 b2 f1 f5 d3 ed 8e 20 ca 7f 98 18 |.......... .... 00:22:29.442 00000010 b3 94 da 48 38 e0 20 75 da 59 7b 24 75 7b 77 8a ...H8. u.Y{$u{w. 00:22:29.442 00000020 07 c5 bf ec 66 9b a0 42 57 41 aa 08 71 e2 ea 91 ....f..BWA..q... 00:22:29.442 00000030 b1 e8 20 83 fd 0c 07 01 b4 df 1b 4c 7c e9 ad d9 .. ........L|... 00:22:29.442 00000040 02 9d 00 71 3a 44 d8 7d eb b6 48 1b d4 ff dd 83 ...q:D.}..H..... 00:22:29.442 00000050 4a 31 3a 65 30 d7 cb 61 fe f6 45 53 66 c6 d2 77 J1:e0..a..ESf..w 00:22:29.442 00000060 2d f1 c3 6e 8f dd 60 00 3b 2d 74 e2 06 6e e2 19 -..n..`.;-t..n.. 00:22:29.442 00000070 cf 6c d3 15 50 d9 5b 19 e3 93 f5 60 d9 13 13 af .l..P.[....`.... 00:22:29.442 00000080 59 66 df 82 10 9d 87 2f 9b cd 56 aa c9 01 81 15 Yf...../..V..... 00:22:29.442 00000090 20 a4 1e 00 f5 b1 74 dc 13 87 bc e4 ae c9 58 6a .....t.......Xj 00:22:29.442 000000a0 b9 45 73 31 44 d2 7a 6c 42 c6 3d f2 28 04 8b c9 .Es1D.zlB.=.(... 00:22:29.442 000000b0 2a 7b 7a 7c 95 39 1d 79 91 fe cf 62 32 58 03 8f *{z|.9.y...b2X.. 00:22:29.442 000000c0 3f e6 6d 9c 9a bb f2 31 d1 9e 2b 69 49 77 96 55 ?.m....1..+iIw.U 00:22:29.442 000000d0 3f b9 1d 6b f0 db cb 39 12 7e 0e 6f e3 20 89 45 ?..k...9.~.o. .E 00:22:29.442 000000e0 17 31 4b d7 90 b8 69 09 fe 07 ca af 97 b1 14 fe .1K...i......... 00:22:29.442 000000f0 f8 70 af 94 49 fb 12 a0 33 dd 1a d0 6e b1 86 f2 .p..I...3...n... 00:22:29.442 00000100 14 a8 bf fa b1 35 8f bb 73 a7 1a a1 54 4b d9 31 .....5..s...TK.1 00:22:29.442 00000110 dc e8 21 f6 aa 0c 5c 3c 6c bb c2 d6 35 0a f8 bc ..!...\.s.3.E4.=k8B 00:22:29.442 00000040 c8 d2 02 0e 59 db 79 78 d3 6e a1 bf a2 3b 6d 4b ....Y.yx.n...;mK 00:22:29.442 00000050 df a0 a8 d5 5d 44 a9 9e 56 7d 8d 3d be dc 8e ff ....]D..V}.=.... 
00:22:29.442 00000060 63 3b 51 32 5f 12 03 47 6a 26 43 6c a2 6b ab 4d c;Q2_..Gj&Cl.k.M 00:22:29.442 00000070 db 62 de e8 53 2f 85 89 86 9c 65 63 87 a1 c9 82 .b..S/....ec.... 00:22:29.442 00000080 7b 8c 76 80 76 bb 94 0c 0a d4 e3 88 4e 01 ae 8a {.v.v.......N... 00:22:29.442 00000090 57 c9 25 89 29 91 38 ae 49 74 e8 25 4b 91 05 60 W.%.).8.It.%K..` 00:22:29.442 000000a0 7b 49 45 ad 39 cc 8e c4 db dd 2d d0 3f 94 05 67 {IE.9.....-.?..g 00:22:29.442 000000b0 5b 5a 8f a9 37 a3 ee 96 28 c8 22 c5 5f fa 0f 7b [Z..7...(."._..{ 00:22:29.442 000000c0 89 11 db 6e 53 86 1c 20 23 88 da 33 46 4b 59 fa ...nS.. #..3FKY. 00:22:29.442 000000d0 61 9f 83 cb 7d 66 66 84 74 8e 42 86 be e7 22 48 a...}ff.t.B..."H 00:22:29.442 000000e0 9c 78 b0 52 42 c9 8b c9 95 a2 68 6e df f5 e6 ad .x.RB.....hn.... 00:22:29.442 000000f0 bd e4 04 a3 5e 8c 71 c7 40 4c 21 a3 ec 15 e6 34 ....^.q.@L!....4 00:22:29.442 00000100 78 2a e9 59 28 79 0a 4a 9b a6 82 a4 17 5b 1e b6 x*.Y(y.J.....[.. 00:22:29.442 00000110 3b da 67 0b ac 60 10 4e b1 27 84 83 43 f6 a8 e5 ;.g..`.N.'..C... 00:22:29.442 00000120 e6 1c 12 62 bd 51 5a 62 2d 93 3b f1 4f 48 61 2c ...b.QZb-.;.OHa, 00:22:29.442 00000130 c4 d2 3e 62 74 bb 0b f1 8a e5 ff 14 cd ab 41 e5 ..>bt.........A. 00:22:29.442 00000140 44 e1 e4 fe 24 bc a0 3e 97 f3 3f f3 88 98 3b c4 D...$..>..?...;. 00:22:29.442 00000150 0d f8 0d d2 d5 66 0d f4 30 5e 55 4d 45 36 1d 66 .....f..0^UME6.f 00:22:29.442 00000160 be fc 11 f4 9d 9d ec 9c 3d 78 71 b3 82 67 39 41 ........=xq..g9A 00:22:29.442 00000170 65 29 dc fb 85 69 dd cf e6 70 0d b4 00 d3 d8 73 e)...i...p.....s 00:22:29.442 [2024-09-27 13:27:02.069027] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key2, hash=1, dhgroup=2, seq=3775755187, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.442 [2024-09-27 13:27:02.069410] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.442 [2024-09-27 13:27:02.077544] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.442 [2024-09-27 13:27:02.077912] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.442 [2024-09-27 13:27:02.078041] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.442 [2024-09-27 13:27:02.078289] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.442 [2024-09-27 13:27:02.130625] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.442 [2024-09-27 13:27:02.130984] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 1 (sha256) 00:22:29.442 [2024-09-27 13:27:02.131170] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 2 (ffdhe3072) 00:22:29.442 [2024-09-27 13:27:02.131331] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.442 [2024-09-27 13:27:02.131590] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] 
auth state: await-challenge 00:22:29.442 ctrlr pubkey: 00:22:29.442 00000000 ec 76 21 2a dd 06 14 90 98 33 6d 5c 0b 11 bf 97 .v!*.....3m\.... 00:22:29.442 00000010 64 8d d2 23 7f 73 e7 9c 24 cc 6c 8b 33 bd 84 26 d..#.s..$.l.3..& 00:22:29.442 00000020 8a a8 c5 45 be d3 84 ae 97 20 e1 5c 8a 27 12 3e ...E..... .\.'.> 00:22:29.442 00000030 0f 80 2f 00 7c c8 87 9f 6d 6e 79 79 8b 03 7e cb ../.|...mnyy..~. 00:22:29.442 00000040 88 af a8 92 ca 8d 53 b4 9d 09 d1 97 37 fa 60 f7 ......S.....7.`. 00:22:29.442 00000050 d4 0e b3 df 6d 57 79 2d 3a 9c 1a 72 a8 2c 3f bc ....mWy-:..r.,?. 00:22:29.442 00000060 ec 12 b0 b2 b4 21 3d f5 e0 6c 15 11 94 2d 03 87 .....!=..l...-.. 00:22:29.442 00000070 b9 0e 54 ab f4 a1 d4 7d 6a 8e 81 91 db b3 b3 64 ..T....}j......d 00:22:29.442 00000080 eb 56 38 8c eb 2f 6c 8c 0f 7a 01 52 ce 86 85 e4 .V8../l..z.R.... 00:22:29.442 00000090 1f 6f de 3f 5b a8 12 ea f5 30 5e 5a a3 88 ce 84 .o.?[....0^Z.... 00:22:29.442 000000a0 99 e9 09 12 15 e8 9d 9e 5d ae 8a 8d 27 7b 8a 79 ........]...'{.y 00:22:29.442 000000b0 05 5a c3 f3 ca 0e 53 9a ea 3a ff be b2 ae 51 02 .Z....S..:....Q. 00:22:29.442 000000c0 23 97 11 b0 2d e7 2c f9 20 f8 18 ab 2a 1e 99 60 #...-.,. ...*..` 00:22:29.442 000000d0 41 5a ba 06 4b e7 51 c5 6f a9 e4 e6 a8 ca 4d 20 AZ..K.Q.o.....M 00:22:29.442 000000e0 71 7d 8f 11 f0 53 2e 59 a9 87 89 ae 3b 63 41 0d q}...S.Y....;cA. 00:22:29.442 000000f0 0b d6 95 ae 68 f2 42 64 1d 37 8c 81 cc 83 c9 86 ....h.Bd.7...... 00:22:29.442 00000100 34 55 94 2c 1e 61 77 57 78 00 23 bb 67 41 cb a9 4U.,.awWx.#.gA.. 00:22:29.442 00000110 8f 5e 5a 6a d0 9d 53 88 cf bd f2 5c 61 f0 19 62 .^Zj..S....\a..b 00:22:29.442 00000120 86 16 6c 82 3f a1 a2 96 21 a4 a7 d8 87 ec 99 ba ..l.?...!....... 00:22:29.442 00000130 a1 86 93 3b 50 c3 10 be ae 56 30 a1 22 76 fd 11 ...;P....V0."v.. 00:22:29.443 00000140 25 79 4f a7 90 a5 8f 80 2d bf b1 49 a6 63 ef a5 %yO.....-..I.c.. 00:22:29.443 00000150 e0 e2 f6 5f ba 79 91 aa 49 fa 54 38 09 ce 78 cf ..._.y..I.T8..x. 00:22:29.443 00000160 ca 0a 41 fc 50 52 2b 21 55 46 51 db 88 c8 39 c6 ..A.PR+!UFQ...9. 00:22:29.443 00000170 18 0c 40 3a 66 b5 eb c7 86 33 98 28 8c b0 c9 97 ..@:f....3.(.... 00:22:29.443 host pubkey: 00:22:29.443 00000000 ec f4 4a 7f 05 c4 cd 36 26 4b a5 de 04 75 cc 62 ..J....6&K...u.b 00:22:29.443 00000010 e9 57 09 ce 77 55 f4 b1 c9 b7 99 5e f4 29 90 3a .W..wU.....^.).: 00:22:29.443 00000020 aa 9b 35 48 49 cf 9d 03 92 c2 e4 16 b2 70 29 f1 ..5HI........p). 00:22:29.443 00000030 17 b1 56 ad ab bb d6 1f 8f 02 ac da 85 54 b0 49 ..V..........T.I 00:22:29.443 00000040 3f 9b 8a e5 ef ab e4 c2 1b 7b d8 0d 69 4b 70 a7 ?........{..iKp. 00:22:29.443 00000050 1d e1 bb 67 8f 82 16 91 e9 f8 14 47 f1 fd 3d a3 ...g.......G..=. 00:22:29.443 00000060 80 68 a8 54 fb 5e 02 32 b1 1a f9 3b 9b 90 72 b2 .h.T.^.2...;..r. 00:22:29.443 00000070 fc 9d 5c e3 ad e2 b0 b3 a6 78 07 4d 5f d0 dd 78 ..\......x.M_..x 00:22:29.443 00000080 c2 ee 60 c6 58 91 30 c9 4f 4d 5b db 18 9a 2d 3c ..`.X.0.OM[...-< 00:22:29.443 00000090 b7 fb ee f7 7b 09 5d 4a 81 49 58 97 70 35 a5 88 ....{.]J.IX.p5.. 00:22:29.443 000000a0 7c d5 bb 82 62 ee df 8b a7 7f 5b 21 55 01 16 5d |...b.....[!U..] 00:22:29.443 000000b0 41 27 dc d0 41 da 4c 6f 2f 4c ce 2f eb 3e 4a 69 A'..A.Lo/L./.>Ji 00:22:29.443 000000c0 fe 0c 30 4f 5c 83 af 74 1f 75 8b 97 b8 70 6e ab ..0O\..t.u...pn. 00:22:29.443 000000d0 61 fb a0 b4 00 78 90 71 96 49 cb 09 fb 31 a6 96 a....x.q.I...1.. 00:22:29.443 000000e0 f9 09 7e 8a be 56 50 a6 c7 22 13 75 ed 0b 15 80 ..~..VP..".u.... 
00:22:29.443 000000f0 54 9b a9 c4 e1 24 32 1a 87 7c 94 7c 31 45 f2 6f T....$2..|.|1E.o 00:22:29.443 00000100 e3 34 24 20 82 c9 38 08 1c e6 95 59 36 6c 69 2e .4$ ..8....Y6li. 00:22:29.443 00000110 db e5 dd a0 28 9a 89 78 8f 58 4f e1 2c 3b ba 62 ....(..x.XO.,;.b 00:22:29.443 00000120 5f eb cb 97 d2 79 fe b9 00 0c e6 ee d5 de 91 1a _....y.......... 00:22:29.443 00000130 20 58 a9 51 3f 4b 19 b8 ff a0 79 43 7b 26 d6 69 X.Q?K....yC{&.i 00:22:29.443 00000140 ff 20 97 f0 6a d9 7e 64 39 02 1d 97 65 cd bb dd . ..j.~d9...e... 00:22:29.443 00000150 dc c5 36 55 8c c7 18 4b 55 cb ce 2e 50 93 58 0a ..6U...KU...P.X. 00:22:29.443 00000160 f5 a0 95 4e 1d 40 b5 fc f4 f2 cc 35 3b e2 c9 52 ...N.@.....5;..R 00:22:29.443 00000170 20 92 29 ba 41 54 3d cb 7f 33 40 81 e4 e9 99 20 .).AT=..3@.... 00:22:29.443 dh secret: 00:22:29.443 00000000 0e 87 06 2a ed a8 90 a4 22 24 4d 9a 75 b0 f0 ad ...*...."$M.u... 00:22:29.443 00000010 8b b2 ff ad d8 ce 05 02 dd 4f 8b 46 71 05 4b e3 .........O.Fq.K. 00:22:29.443 00000020 a5 72 04 09 57 df a0 95 b5 83 2b 0f 5d 15 52 98 .r..W.....+.].R. 00:22:29.443 00000030 d1 ba 56 4e 4d 6c 32 43 a4 0d 83 e6 c8 1d 38 20 ..VNMl2C......8 00:22:29.443 00000040 c2 72 46 70 a3 4a 7b 60 65 68 45 ca 5a 21 51 81 .rFp.J{`ehE.Z!Q. 00:22:29.443 00000050 0f a6 b8 f2 6d b0 7d a1 6d 15 82 14 fd e6 07 b3 ....m.}.m....... 00:22:29.443 00000060 33 ae cd 84 b8 bd eb 27 8b dc 0b 15 65 a5 a1 c1 3......'....e... 00:22:29.443 00000070 3f b6 e2 e9 34 a7 b0 5d 22 6b 89 da 2c 81 3e 20 ?...4..]"k..,.> 00:22:29.443 00000080 b7 9e ea 81 6a 5e 5f eb 05 b9 89 51 12 90 60 47 ....j^_....Q..`G 00:22:29.443 00000090 8d da fc ef 5c aa 21 b1 3c a2 f4 94 f0 85 10 98 ....\.!.<....... 00:22:29.443 000000a0 5b 27 85 8f 93 a9 d6 48 bf 3d 6f 31 4e 9b e0 81 ['.....H.=o1N... 00:22:29.443 000000b0 23 e3 a1 5e 41 82 cb fc 09 36 14 16 25 c3 8a e5 #..^A....6..%... 00:22:29.443 000000c0 54 fb 9a 3f 93 d9 0b b2 19 c9 b7 c0 c6 a8 16 78 T..?...........x 00:22:29.443 000000d0 15 c5 10 03 c4 c0 93 0c f4 09 38 c2 7d d3 3f ce ..........8.}.?. 00:22:29.443 000000e0 0e 2d c6 43 ac 22 6a 4a be e1 d3 ea 6e e1 a0 ba .-.C."jJ....n... 00:22:29.443 000000f0 b3 9c ce 6a 7e ba 62 c4 28 bc 5d d5 ba b3 d5 73 ...j~.b.(.]....s 00:22:29.443 00000100 57 fd a6 3f be ac ca 35 ca b6 50 e0 27 f9 92 63 W..?...5..P.'..c 00:22:29.443 00000110 27 95 38 36 08 dc 84 a3 0f 4d 86 36 06 27 ff 39 '.86.....M.6.'.9 00:22:29.443 00000120 31 35 ca f8 c2 f5 87 73 24 95 73 1c 75 0f 32 b6 15.....s$.s.u.2. 00:22:29.443 00000130 d2 d4 10 2d 98 f0 1f 13 be 39 77 a3 c1 a1 c0 a6 ...-.....9w..... 00:22:29.443 00000140 ae 56 97 94 6e 64 b4 65 6d c3 c0 ad 98 a5 46 93 .V..nd.em.....F. 00:22:29.443 00000150 62 b8 6a ad fc 3c 22 09 73 ec 43 90 2f d1 55 ca b.j..<".s.C./.U. 00:22:29.443 00000160 e9 8f 67 c6 53 e2 4c 95 70 09 53 ae 60 0b 81 68 ..g.S.L.p.S.`..h 00:22:29.443 00000170 d3 c5 db c5 4a bd 8d a0 21 72 0a 96 61 c0 cc a4 ....J...!r..a... 
00:22:29.443 [2024-09-27 13:27:02.147557] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key2, hash=1, dhgroup=2, seq=3775755188, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.443 [2024-09-27 13:27:02.147864] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.443 [2024-09-27 13:27:02.157475] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.443 [2024-09-27 13:27:02.157899] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.443 [2024-09-27 13:27:02.158209] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.443 [2024-09-27 13:27:02.158454] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.443 [2024-09-27 13:27:02.256914] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.443 [2024-09-27 13:27:02.257134] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.443 [2024-09-27 13:27:02.257340] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 2 (ffdhe3072) 00:22:29.443 [2024-09-27 13:27:02.257451] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.443 [2024-09-27 13:27:02.257647] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.443 ctrlr pubkey: 00:22:29.443 00000000 18 c7 25 fc 86 65 a7 90 6a 9f c0 14 69 87 2f 94 ..%..e..j...i./. 00:22:29.443 00000010 7b df 48 01 b0 91 9b 46 f5 72 57 d9 98 8f ca f0 {.H....F.rW..... 00:22:29.443 00000020 33 37 7b 33 8c a2 b6 db 41 6f b3 bc 7f 8c e3 a7 37{3....Ao...... 00:22:29.443 00000030 fd 5b 53 0a 7e 13 c6 34 94 85 c6 cb f2 1a a8 22 .[S.~..4......." 00:22:29.443 00000040 7c 25 53 d9 21 14 7d a4 4f 59 32 c2 2e fd 13 b3 |%S.!.}.OY2..... 00:22:29.443 00000050 98 fd c1 3c 27 64 71 5f e6 92 b9 b9 85 49 24 87 ...<'dq_.....I$. 00:22:29.443 00000060 1b ba 1f b4 e8 7b 95 cb f6 89 1b 86 e8 1f 78 97 .....{........x. 00:22:29.443 00000070 4b af da 1a a4 70 43 08 3e 46 55 73 1e 12 27 7f K....pC.>FUs..'. 00:22:29.443 00000080 bd 6c 3a e3 ab 46 1b 4a 91 4a cc f4 74 71 5d 4f .l:..F.J.J..tq]O 00:22:29.443 00000090 87 1e f7 62 4c d9 26 67 c0 50 e8 8d 7f d6 ef b3 ...bL.&g.P...... 00:22:29.443 000000a0 88 fc c2 d2 27 11 5c ac fd 9b e1 4d 76 45 9b be ....'.\....MvE.. 00:22:29.443 000000b0 3a 70 67 3f ee 32 b3 44 93 76 d6 8d fd d6 99 16 :pg?.2.D.v...... 00:22:29.443 000000c0 ec 64 d5 e4 32 86 5a fd 43 5d 3a a4 74 c7 01 cf .d..2.Z.C]:.t... 00:22:29.443 000000d0 15 f4 64 56 35 70 1f 4e 44 25 e7 db 27 fa dd e4 ..dV5p.ND%..'... 00:22:29.443 000000e0 2c 71 93 0a 35 62 6a fe 8f 5e da c2 41 a3 fe 25 ,q..5bj..^..A..% 00:22:29.443 000000f0 44 ec 2d 27 b4 23 c7 4b 6f 10 eb f2 1f d4 8f cd D.-'.#.Ko....... 00:22:29.443 00000100 2d c8 cb 9e 3d e6 d5 cb ac b6 5b 8f 26 21 a7 21 -...=.....[.&!.! 
00:22:29.443 00000110 66 f9 e7 44 84 5b b3 c8 d5 90 82 d4 9e 48 12 8f f..D.[.......H.. 00:22:29.443 00000120 80 4e 09 70 27 cd d2 ae ff 1d a6 0a f5 36 b1 94 .N.p'........6.. 00:22:29.443 00000130 a8 9f 9e 52 a6 2e 5c a9 de 20 ae 8b f0 a3 66 89 ...R..\.. ....f. 00:22:29.443 00000140 f2 ce a2 87 d7 c8 9b b4 94 d3 71 64 1f 91 af 69 ..........qd...i 00:22:29.443 00000150 32 a4 dc 4e 71 20 53 01 d6 d1 15 2f bb f5 1c e5 2..Nq S..../.... 00:22:29.443 00000160 64 2f d6 d6 28 ee b8 e7 37 ab 2b 5a f0 7c d0 18 d/..(...7.+Z.|.. 00:22:29.443 00000170 17 14 a3 7d 84 35 61 a5 07 68 a3 5e 0d fb 66 9f ...}.5a..h.^..f. 00:22:29.443 host pubkey: 00:22:29.443 00000000 6b 48 48 75 10 d0 6c c5 f1 2c 85 c9 dc 78 93 11 kHHu..l..,...x.. 00:22:29.443 00000010 ba f4 08 ea 99 a2 2f 35 9a 97 e2 6f 7c 50 0e c1 ....../5...o|P.. 00:22:29.443 00000020 3a 9a e0 73 01 8b 67 95 ba 42 e3 4d f1 09 7b f0 :..s..g..B.M..{. 00:22:29.443 00000030 2b 85 f0 4c 03 e7 7c c9 f6 89 ff a8 9e dc aa 77 +..L..|........w 00:22:29.443 00000040 74 6a 3a f5 90 73 af 11 74 41 4b f6 9b bc a5 bc tj:..s..tAK..... 00:22:29.443 00000050 78 47 82 59 b9 b6 43 ff 54 30 8c 22 97 5d 35 45 xG.Y..C.T0.".]5E 00:22:29.443 00000060 a6 f6 5b 31 26 e2 68 0a f8 73 b6 3a 81 76 99 6b ..[1&.h..s.:.v.k 00:22:29.443 00000070 07 cb 6e ce 00 52 fa ac 4d a5 23 d2 88 33 64 ba ..n..R..M.#..3d. 00:22:29.443 00000080 33 92 5d aa e7 ec a9 f8 ea 7d 9f c3 51 d5 f2 12 3.]......}..Q... 00:22:29.443 00000090 1b 61 90 cb 40 c8 ef dc 4b 27 6d 83 a7 78 89 a0 .a..@...K'm..x.. 00:22:29.443 000000a0 2a a2 3a 20 a3 7e 8b 3c 92 0e 26 4a a3 4a 14 21 *.: .~.<..&J.J.! 00:22:29.443 000000b0 b9 08 0e 8d eb 50 3f f9 15 45 de 1d e7 0b c1 d2 .....P?..E...... 00:22:29.443 000000c0 70 eb 15 f7 e7 dd a6 7f 8f 74 20 03 bc 48 56 07 p........t ..HV. 00:22:29.443 000000d0 d9 03 b0 7f 4a 7d 62 c1 af bd 14 03 42 29 7d c7 ....J}b.....B)}. 00:22:29.443 000000e0 c3 5c 4e 8f 96 e4 88 fc a1 8c e6 33 87 5d d9 bf .\N........3.].. 00:22:29.443 000000f0 0e 3d cf 64 dd 2b 46 bb 61 cb be 2d 49 9f a1 79 .=.d.+F.a..-I..y 00:22:29.443 00000100 ba e2 06 45 4c 2a ff d3 a9 ba 0f 86 a1 68 63 cd ...EL*.......hc. 00:22:29.443 00000110 b4 b7 28 95 17 1d 03 09 52 bc bb af ea 0e da 75 ..(.....R......u 00:22:29.443 00000120 79 7f 32 1a d1 41 2a d4 a1 af c1 5d 9d 20 cc 1a y.2..A*....]. .. 00:22:29.443 00000130 51 23 6e 72 0d c9 b1 89 dc 20 ce fa 38 0a 01 21 Q#nr..... ..8..! 00:22:29.443 00000140 6c ae 0d fc 1b 9f 4a 0e c5 ef 55 b9 79 ec b6 2d l.....J...U.y..- 00:22:29.443 00000150 4f b6 2c 2e 10 28 fe 27 c2 a3 70 57 e4 c3 b4 a8 O.,..(.'..pW.... 00:22:29.443 00000160 09 5b 84 e7 20 ef dc 50 0f c8 27 66 42 5a 36 3d .[.. ..P..'fBZ6= 00:22:29.443 00000170 28 6b 6a 47 d9 e5 72 8f b4 3a 6f 85 15 da 8b ad (kjG..r..:o..... 00:22:29.443 dh secret: 00:22:29.443 00000000 c7 8b 6c 43 fa a0 90 03 98 76 ef e0 3e 66 ed 18 ..lC.....v..>f.. 00:22:29.443 00000010 c2 63 96 a9 ea 1d fc 40 32 ba b9 19 37 3c 16 b6 .c.....@2...7<.. 00:22:29.443 00000020 87 fb 1b 8e 7c c5 3b 7e 46 4b 96 13 61 43 4b fe ....|.;~FK..aCK. 00:22:29.443 00000030 b2 0a 4d 68 b6 2b 02 d1 89 bf d7 25 4b c7 35 91 ..Mh.+.....%K.5. 00:22:29.443 00000040 af 4c 2c c3 58 9b 88 73 8b f6 0b 9e 3f 54 82 6d .L,.X..s....?T.m 00:22:29.443 00000050 02 e1 34 a8 37 4a 3c 61 9b bf b9 9a 1a c1 59 9e ..4.7J6'P.. 00:22:29.443 000000d0 2c e7 a6 50 82 1e 6d 32 98 08 56 97 cb f1 34 b5 ,..P..m2..V...4. 00:22:29.443 000000e0 2f 4b e5 c1 dc a7 25 1e 5f 14 f1 13 40 53 74 e9 /K....%._...@St. 
00:22:29.443 000000f0 17 a3 c2 9f 01 b6 86 60 01 35 e5 ef 46 e5 05 3a .......`.5..F..: 00:22:29.443 00000100 c7 38 f8 82 7d 29 e9 95 f8 7d e4 c8 21 7c 6e fc .8..})...}..!|n. 00:22:29.444 00000110 c2 fa 1b 23 50 39 bc 37 ff dd 7e 68 be f8 58 75 ...#P9.7..~h..Xu 00:22:29.444 00000120 4b 30 76 f0 08 18 6f 89 7f 06 8c 86 e3 92 3d 16 K0v...o.......=. 00:22:29.444 00000130 de fe e0 53 32 71 0b 70 54 dd 15 4f 8c 85 39 6e ...S2q.pT..O..9n 00:22:29.444 00000140 59 73 98 aa d4 73 b5 d4 d9 68 b4 5a 03 9d 23 56 Ys...s...h.Z..#V 00:22:29.444 00000150 ed 6b 07 a6 05 59 19 26 3a 34 5a 6e f3 67 11 c0 .k...Y.&:4Zn.g.. 00:22:29.444 00000160 9b a5 32 c0 c0 49 a5 19 cd 4c 33 fc e9 19 d7 62 ..2..I...L3....b 00:22:29.444 00000170 7c e8 5f 09 0d 44 20 32 1d a2 e1 30 56 ee f7 8c |._..D 2...0V... 00:22:29.444 [2024-09-27 13:27:02.272837] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key3, hash=1, dhgroup=2, seq=3775755189, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.444 [2024-09-27 13:27:02.273163] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.444 [2024-09-27 13:27:02.286712] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.444 [2024-09-27 13:27:02.287068] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.444 [2024-09-27 13:27:02.287257] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.444 [2024-09-27 13:27:02.287437] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.444 [2024-09-27 13:27:02.338781] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.444 [2024-09-27 13:27:02.339050] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 1 (sha256) 00:22:29.444 [2024-09-27 13:27:02.339228] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 2 (ffdhe3072) 00:22:29.444 [2024-09-27 13:27:02.339416] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.444 [2024-09-27 13:27:02.339651] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.444 ctrlr pubkey: 00:22:29.444 00000000 18 c7 25 fc 86 65 a7 90 6a 9f c0 14 69 87 2f 94 ..%..e..j...i./. 00:22:29.444 00000010 7b df 48 01 b0 91 9b 46 f5 72 57 d9 98 8f ca f0 {.H....F.rW..... 00:22:29.444 00000020 33 37 7b 33 8c a2 b6 db 41 6f b3 bc 7f 8c e3 a7 37{3....Ao...... 00:22:29.444 00000030 fd 5b 53 0a 7e 13 c6 34 94 85 c6 cb f2 1a a8 22 .[S.~..4......." 00:22:29.444 00000040 7c 25 53 d9 21 14 7d a4 4f 59 32 c2 2e fd 13 b3 |%S.!.}.OY2..... 00:22:29.444 00000050 98 fd c1 3c 27 64 71 5f e6 92 b9 b9 85 49 24 87 ...<'dq_.....I$. 00:22:29.444 00000060 1b ba 1f b4 e8 7b 95 cb f6 89 1b 86 e8 1f 78 97 .....{........x. 00:22:29.444 00000070 4b af da 1a a4 70 43 08 3e 46 55 73 1e 12 27 7f K....pC.>FUs..'. 
00:22:29.444 00000080 bd 6c 3a e3 ab 46 1b 4a 91 4a cc f4 74 71 5d 4f .l:..F.J.J..tq]O 00:22:29.444 00000090 87 1e f7 62 4c d9 26 67 c0 50 e8 8d 7f d6 ef b3 ...bL.&g.P...... 00:22:29.444 000000a0 88 fc c2 d2 27 11 5c ac fd 9b e1 4d 76 45 9b be ....'.\....MvE.. 00:22:29.444 000000b0 3a 70 67 3f ee 32 b3 44 93 76 d6 8d fd d6 99 16 :pg?.2.D.v...... 00:22:29.444 000000c0 ec 64 d5 e4 32 86 5a fd 43 5d 3a a4 74 c7 01 cf .d..2.Z.C]:.t... 00:22:29.444 000000d0 15 f4 64 56 35 70 1f 4e 44 25 e7 db 27 fa dd e4 ..dV5p.ND%..'... 00:22:29.444 000000e0 2c 71 93 0a 35 62 6a fe 8f 5e da c2 41 a3 fe 25 ,q..5bj..^..A..% 00:22:29.444 000000f0 44 ec 2d 27 b4 23 c7 4b 6f 10 eb f2 1f d4 8f cd D.-'.#.Ko....... 00:22:29.444 00000100 2d c8 cb 9e 3d e6 d5 cb ac b6 5b 8f 26 21 a7 21 -...=.....[.&!.! 00:22:29.444 00000110 66 f9 e7 44 84 5b b3 c8 d5 90 82 d4 9e 48 12 8f f..D.[.......H.. 00:22:29.444 00000120 80 4e 09 70 27 cd d2 ae ff 1d a6 0a f5 36 b1 94 .N.p'........6.. 00:22:29.444 00000130 a8 9f 9e 52 a6 2e 5c a9 de 20 ae 8b f0 a3 66 89 ...R..\.. ....f. 00:22:29.444 00000140 f2 ce a2 87 d7 c8 9b b4 94 d3 71 64 1f 91 af 69 ..........qd...i 00:22:29.444 00000150 32 a4 dc 4e 71 20 53 01 d6 d1 15 2f bb f5 1c e5 2..Nq S..../.... 00:22:29.444 00000160 64 2f d6 d6 28 ee b8 e7 37 ab 2b 5a f0 7c d0 18 d/..(...7.+Z.|.. 00:22:29.444 00000170 17 14 a3 7d 84 35 61 a5 07 68 a3 5e 0d fb 66 9f ...}.5a..h.^..f. 00:22:29.444 host pubkey: 00:22:29.444 00000000 f2 c5 d5 59 aa 27 a9 ce a6 f5 a7 dd 1f 95 c2 1a ...Y.'.......... 00:22:29.444 00000010 6f bf cd 39 a4 8b c9 93 2f 54 72 f1 0f bb 0b 2d o..9..../Tr....- 00:22:29.444 00000020 10 15 1d 8e 30 b8 87 c4 ea 8b e1 ca 8c c5 ba 84 ....0........... 00:22:29.444 00000030 48 a8 a9 c6 43 b9 1a 9c 02 1a 99 50 c4 d2 84 58 H...C......P...X 00:22:29.444 00000040 34 da a9 ce 6c de a0 35 e0 ea 2a 8c 34 f1 f0 97 4...l..5..*.4... 00:22:29.444 00000050 bb 5c 9b ac 5f bb ac ba 38 5d 0c ea 0c e8 86 18 .\.._...8]...... 00:22:29.444 00000060 6b 61 2b b7 80 a7 82 ec 81 70 32 ba e7 96 d0 f3 ka+......p2..... 00:22:29.444 00000070 e9 66 70 56 f9 a6 f3 79 96 fc 91 f2 6a 92 5a 87 .fpV...y....j.Z. 00:22:29.444 00000080 3e 0d 44 df 93 c2 ca 67 a5 22 eb 78 47 5a a6 3f >.D....g.".xGZ.? 00:22:29.444 00000090 6a 9b a5 24 c5 b8 bd dc 49 d5 68 b4 a1 fd a2 b5 j..$....I.h..... 00:22:29.444 000000a0 93 5c b6 4d d2 bd 75 e8 6b 2d 6c 41 7c a4 43 6c .\.M..u.k-lA|.Cl 00:22:29.444 000000b0 e7 6d ec df 29 cf ea 78 92 81 75 d6 9d 6f 57 d2 .m..)..x..u..oW. 00:22:29.444 000000c0 56 d2 68 07 67 06 62 eb 32 4a 0a 94 54 94 a6 5d V.h.g.b.2J..T..] 00:22:29.444 000000d0 90 75 04 da 1a 6b 86 41 a0 ee 82 e7 95 56 8a e9 .u...k.A.....V.. 00:22:29.444 000000e0 e4 90 37 7f 64 bc df ec 61 a4 40 5d 8b d6 6e 75 ..7.d...a.@]..nu 00:22:29.444 000000f0 5e 47 29 ac 4f de 40 da 79 8d 6d 22 34 c2 ca 0a ^G).O.@.y.m"4... 00:22:29.444 00000100 73 bb c5 0c 3a 5a 97 61 81 a4 2e b3 5d 8e 87 fc s...:Z.a....]... 00:22:29.444 00000110 a5 17 e4 af 35 4d a3 34 dc 74 7b bc cc 1b f4 19 ....5M.4.t{..... 00:22:29.444 00000120 0c 00 d4 73 2e d9 9c 46 a4 6d 16 87 45 ca f3 16 ...s...F.m..E... 00:22:29.444 00000130 8b 12 2b ef ef d2 c2 e7 cc 34 49 dc 5a dc 55 17 ..+......4I.Z.U. 00:22:29.444 00000140 ed 40 ca e7 14 e1 f8 a8 09 80 c6 4d 7a 23 ac d2 .@.........Mz#.. 00:22:29.444 00000150 68 b6 91 05 12 b8 57 ba a5 7b 75 d2 49 cb 2e 7e h.....W..{u.I..~ 00:22:29.444 00000160 3d 20 8d 3f dd bb 02 77 cb 9d 69 ae c9 60 d4 6b = .?...w..i..`.k 00:22:29.444 00000170 96 90 d3 dc 5b 0c 26 b2 86 af 8a e2 fb 13 f6 88 ....[.&......... 
00:22:29.444 dh secret: 00:22:29.444 00000000 d5 ea 7b 70 52 0e 4b ad cf 08 59 d5 ca 13 44 b5 ..{pR.K...Y...D. 00:22:29.444 00000010 11 68 b0 06 62 3f eb 89 ef 9b 2f 5d b2 b0 c4 7d .h..b?..../]...} 00:22:29.444 00000020 22 8e e3 64 c8 16 ce 8c 00 86 21 51 29 f9 83 12 "..d......!Q)... 00:22:29.444 00000030 ef b5 f8 dc 6b 90 cd 26 8b 68 9d d8 f9 a5 fa 11 ....k..&.h...... 00:22:29.444 00000040 2c 46 f2 c6 d9 75 bd e2 89 36 ff 28 f9 f9 e0 07 ,F...u...6.(.... 00:22:29.444 00000050 e1 bc 88 e2 e7 ee d8 94 c4 fe 3d 2d 98 b5 38 f6 ..........=-..8. 00:22:29.444 00000060 e3 ca d4 8c 3b 68 d0 9f 79 64 6f b2 cd 5e 55 e0 ....;h..ydo..^U. 00:22:29.444 00000070 0a 8a 2b a7 d7 e2 d3 e1 17 38 17 81 21 bd 5b 4e ..+......8..!.[N 00:22:29.444 00000080 71 03 0a 2e dd a3 ea 5a c9 4f 28 cd 9d 38 54 44 q......Z.O(..8TD 00:22:29.444 00000090 33 f5 d9 d1 0d a4 cd 61 63 92 1e 63 76 7f 1f e1 3......ac..cv... 00:22:29.444 000000a0 e8 66 eb d2 b2 8e 40 5f e4 9b 9b e8 a0 99 47 0b .f....@_......G. 00:22:29.444 000000b0 d8 c6 21 e0 ec 71 51 ab 9a 29 25 b9 4e 10 ea 43 ..!..qQ..)%.N..C 00:22:29.444 000000c0 a4 0f c2 1a 43 f9 2c 4b 3b 56 3a 1c 76 94 1e e4 ....C.,K;V:.v... 00:22:29.444 000000d0 a8 77 ca 32 8b c0 ee 54 80 bc d8 ef 88 7c ed 60 .w.2...T.....|.` 00:22:29.444 000000e0 05 c6 83 23 ec 1f 6b 45 02 e1 a3 1e c2 54 58 5c ...#..kE.....TX\ 00:22:29.444 000000f0 64 aa 5f 95 37 d9 90 71 fb c3 30 97 77 fb a1 e3 d._.7..q..0.w... 00:22:29.444 00000100 67 f7 00 2e ca b8 7c 74 1f 5e 5e e9 5b f7 4f ec g.....|t.^^.[.O. 00:22:29.444 00000110 b2 25 28 8c f2 73 87 37 34 6e c5 e6 d9 a4 a2 0c .%(..s.74n...... 00:22:29.444 00000120 88 0e 2b 38 57 97 3c 91 e5 3e f8 0f f8 4e 5f cb ..+8W.<..>...N_. 00:22:29.444 00000130 88 29 5e 87 ef 46 c3 64 1e da a0 30 d3 a3 63 a3 .)^..F.d...0..c. 00:22:29.444 00000140 1b 00 c4 53 46 3b 98 b0 4f 48 3f ff a4 c3 c4 b5 ...SF;..OH?..... 00:22:29.444 00000150 75 28 29 e1 95 89 b6 6d bf 38 15 19 56 b7 ee a5 u()....m.8..V... 00:22:29.444 00000160 7d 43 81 9b 5a a1 7d cc ba b4 f8 91 e2 23 9f bc }C..Z.}......#.. 00:22:29.444 00000170 89 b0 f5 e6 4d ec 8d d6 30 e0 f8 e4 03 77 c0 d1 ....M...0....w.. 
00:22:29.444 [2024-09-27 13:27:02.354663] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key3, hash=1, dhgroup=2, seq=3775755190, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.444 [2024-09-27 13:27:02.355053] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.444 [2024-09-27 13:27:02.364745] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.444 [2024-09-27 13:27:02.365278] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.444 [2024-09-27 13:27:02.365454] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.444 [2024-09-27 13:27:02.365737] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.444 [2024-09-27 13:27:02.462996] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.444 [2024-09-27 13:27:02.463229] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.444 [2024-09-27 13:27:02.463390] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 2 (ffdhe3072) 00:22:29.444 [2024-09-27 13:27:02.463479] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.444 [2024-09-27 13:27:02.463706] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.444 ctrlr pubkey: 00:22:29.444 00000000 ec b8 23 9c 02 54 91 47 26 bc fd 9e b4 46 61 b8 ..#..T.G&....Fa. 00:22:29.444 00000010 51 9c 90 01 2f f1 0a 29 04 7c 0f 2c 67 f0 97 bd Q.../..).|.,g... 00:22:29.444 00000020 69 c4 59 e7 2a 27 5a 1b b0 d9 dc 6f 7f 99 95 07 i.Y.*'Z....o.... 00:22:29.444 00000030 8e 00 5e 65 35 00 05 1a 92 41 ff ad 4b e7 5f 27 ..^e5....A..K._' 00:22:29.444 00000040 ed e6 49 86 e0 54 5c d8 5b ce 5d e8 1c 3e cc 6b ..I..T\.[.]..>.k 00:22:29.444 00000050 76 83 9d ee 80 e0 7e cd f8 0b f6 fe 53 2d f9 24 v.....~.....S-.$ 00:22:29.444 00000060 8a 66 84 b0 cf b7 61 a5 31 ae 63 a8 83 b6 af 10 .f....a.1.c..... 00:22:29.444 00000070 34 64 db 90 fa b8 98 85 f6 12 81 da 35 0c d6 37 4d..........5..7 00:22:29.444 00000080 3d 1c 21 fb 2a 8c d8 55 15 e1 b8 10 f1 d0 df 4a =.!.*..U.......J 00:22:29.444 00000090 d0 d0 c2 88 42 28 c6 85 e2 ef 4f c6 e8 a7 54 f0 ....B(....O...T. 00:22:29.444 000000a0 0c a7 85 79 31 44 d9 6c 67 8e 00 36 c6 b4 21 95 ...y1D.lg..6..!. 00:22:29.444 000000b0 19 93 82 a7 32 3c ad 71 ec 39 db 7b eb 73 89 e2 ....2<.q.9.{.s.. 00:22:29.444 000000c0 e0 a6 16 e8 17 66 66 5f f7 98 1d 57 99 e0 d4 ec .....ff_...W.... 00:22:29.444 000000d0 83 23 5e 91 67 a7 72 63 65 4d f6 86 70 5a 01 58 .#^.g.rceM..pZ.X 00:22:29.444 000000e0 6b 1b bd 2d fa 1d 81 8e 31 9a df 4b 6d 20 14 11 k..-....1..Km .. 00:22:29.444 000000f0 83 48 86 40 f8 14 8b 6f 60 1d 8b c5 f6 d6 6b 13 .H.@...o`.....k. 00:22:29.444 00000100 ba 22 48 19 bc 9e b6 64 83 86 8d 91 ef bd f5 fb ."H....d........ 
00:22:29.444 00000110 5d 17 cc bf de 15 44 07 6f 85 7e 89 39 1b 40 96 ].....D.o.~.9.@. 00:22:29.444 00000120 5e a6 bf bc 7f 3c 10 16 91 9d 24 f2 6c e6 85 d3 ^....<....$.l... 00:22:29.444 00000130 9c 29 30 ef 42 0d a5 6e 1b 80 1d a5 6e 55 f3 a1 .)0.B..n....nU.. 00:22:29.445 00000140 0b 49 fd 1e d2 89 8a 62 b6 6a 2c 6a 0b c9 3b 2d .I.....b.j,j..;- 00:22:29.445 00000150 5c 39 8c e1 3b a4 b1 c8 03 a5 99 4e 0b 12 e1 90 \9..;......N.... 00:22:29.445 00000160 3f a3 fc 5b e6 cf 88 32 3c c1 dd 61 7a 68 1c 17 ?..[...2<..azh.. 00:22:29.445 00000170 1f f8 97 25 f3 87 be be d1 67 14 c7 6a ab 94 77 ...%.....g..j..w 00:22:29.445 host pubkey: 00:22:29.445 00000000 30 02 d3 64 2e da f3 29 fd a4 00 53 9e 04 a8 ff 0..d...)...S.... 00:22:29.445 00000010 59 3d 7f ca 65 84 aa 8a 8d 16 b9 a7 34 88 6d d5 Y=..e.......4.m. 00:22:29.445 00000020 b0 c4 91 85 52 19 f3 3c d8 79 d8 9f a7 8b b3 f6 ....R..<.y...... 00:22:29.445 00000030 0f fe 09 b8 b4 91 5f f0 27 ff e4 9f 3c 38 a4 6e ......_.'...<8.n 00:22:29.445 00000040 d8 52 9e 7d 03 31 64 4a e1 9f 16 93 4f bc b7 25 .R.}.1dJ....O..% 00:22:29.445 00000050 d5 ae e7 0e a2 02 3f a6 4c e2 bc 4a 29 0c 3f da ......?.L..J).?. 00:22:29.445 00000060 6c 93 a2 f5 9d fe a1 b6 c3 f1 d4 79 b1 11 b5 00 l..........y.... 00:22:29.445 00000070 01 17 54 60 ba 85 fb 82 57 52 9b 7e 54 eb 71 ea ..T`....WR.~T.q. 00:22:29.445 00000080 84 f6 d9 bd 44 10 27 7a 3c 25 db 18 fa 60 95 82 ....D.'z<%...`.. 00:22:29.445 00000090 06 7b ab 12 4e 85 4a ce e9 38 bc 1c cd cf d8 54 .{..N.J..8.....T 00:22:29.445 000000a0 21 d6 21 1a 61 18 19 7a 40 be 97 17 5a 87 4c d5 !.!.a..z@...Z.L. 00:22:29.445 000000b0 21 aa 8b 64 fd 06 e0 7d c9 47 bc 29 57 ec 72 07 !..d...}.G.)W.r. 00:22:29.445 000000c0 c0 be b7 b5 69 08 0f ea 66 a5 5f fe 0a 98 c0 a9 ....i...f._..... 00:22:29.445 000000d0 30 c4 d6 24 28 64 05 0e 6f 1e 3c ad de 28 2a 8e 0..$(d..o.<..(*. 00:22:29.445 000000e0 5e ed 9d 9e 0f 27 03 b9 26 5b 13 15 8b 6b 72 69 ^....'..&[...kri 00:22:29.445 000000f0 db dc 01 3e 2d 61 a8 b6 4e e4 d0 f8 77 81 f3 68 ...>-a..N...w..h 00:22:29.445 00000100 30 c1 f0 db c9 be b0 8d 52 74 23 3f 12 a7 0b b9 0.......Rt#?.... 00:22:29.445 00000110 3e 8b cf b5 dc df 78 89 ad 30 5b 03 d6 ae fa 89 >.....x..0[..... 00:22:29.445 00000120 69 d8 23 7b 68 1c f2 37 bf e9 e6 1a 77 7b fb ed i.#{h..7....w{.. 00:22:29.445 00000130 69 79 9c 86 d3 98 3c 6f 88 e4 aa af 16 90 85 7e iy.....k 00:22:29.445 00000050 76 83 9d ee 80 e0 7e cd f8 0b f6 fe 53 2d f9 24 v.....~.....S-.$ 00:22:29.445 00000060 8a 66 84 b0 cf b7 61 a5 31 ae 63 a8 83 b6 af 10 .f....a.1.c..... 00:22:29.445 00000070 34 64 db 90 fa b8 98 85 f6 12 81 da 35 0c d6 37 4d..........5..7 00:22:29.445 00000080 3d 1c 21 fb 2a 8c d8 55 15 e1 b8 10 f1 d0 df 4a =.!.*..U.......J 00:22:29.445 00000090 d0 d0 c2 88 42 28 c6 85 e2 ef 4f c6 e8 a7 54 f0 ....B(....O...T. 00:22:29.445 000000a0 0c a7 85 79 31 44 d9 6c 67 8e 00 36 c6 b4 21 95 ...y1D.lg..6..!. 00:22:29.445 000000b0 19 93 82 a7 32 3c ad 71 ec 39 db 7b eb 73 89 e2 ....2<.q.9.{.s.. 00:22:29.445 000000c0 e0 a6 16 e8 17 66 66 5f f7 98 1d 57 99 e0 d4 ec .....ff_...W.... 00:22:29.445 000000d0 83 23 5e 91 67 a7 72 63 65 4d f6 86 70 5a 01 58 .#^.g.rceM..pZ.X 00:22:29.445 000000e0 6b 1b bd 2d fa 1d 81 8e 31 9a df 4b 6d 20 14 11 k..-....1..Km .. 00:22:29.445 000000f0 83 48 86 40 f8 14 8b 6f 60 1d 8b c5 f6 d6 6b 13 .H.@...o`.....k. 00:22:29.445 00000100 ba 22 48 19 bc 9e b6 64 83 86 8d 91 ef bd f5 fb ."H....d........ 00:22:29.445 00000110 5d 17 cc bf de 15 44 07 6f 85 7e 89 39 1b 40 96 ].....D.o.~.9.@. 
00:22:29.445 00000120 5e a6 bf bc 7f 3c 10 16 91 9d 24 f2 6c e6 85 d3 ^....<....$.l... 00:22:29.445 00000130 9c 29 30 ef 42 0d a5 6e 1b 80 1d a5 6e 55 f3 a1 .)0.B..n....nU.. 00:22:29.445 00000140 0b 49 fd 1e d2 89 8a 62 b6 6a 2c 6a 0b c9 3b 2d .I.....b.j,j..;- 00:22:29.445 00000150 5c 39 8c e1 3b a4 b1 c8 03 a5 99 4e 0b 12 e1 90 \9..;......N.... 00:22:29.445 00000160 3f a3 fc 5b e6 cf 88 32 3c c1 dd 61 7a 68 1c 17 ?..[...2<..azh.. 00:22:29.445 00000170 1f f8 97 25 f3 87 be be d1 67 14 c7 6a ab 94 77 ...%.....g..j..w 00:22:29.445 host pubkey: 00:22:29.445 00000000 a5 6f cd 44 2a 30 e9 71 9c 2e 0a 52 93 19 a3 6e .o.D*0.q...R...n 00:22:29.445 00000010 e3 68 d8 37 42 4f 06 43 42 b2 e3 34 91 b3 f4 ac .h.7BO.CB..4.... 00:22:29.445 00000020 82 55 42 b1 b6 fa a6 d9 12 64 b6 0f 09 1f 83 eb .UB......d...... 00:22:29.445 00000030 b3 e6 a6 5f 20 a5 5d 2e 31 2e 63 59 83 2e f1 03 ..._ .].1.cY.... 00:22:29.445 00000040 a6 14 cb a2 7a e7 3d 84 5e b3 0c 01 78 97 1c 95 ....z.=.^...x... 00:22:29.445 00000050 8c 25 f5 fb 6e 63 63 0d 76 b6 cf 44 89 1e 62 e0 .%..ncc.v..D..b. 00:22:29.445 00000060 a7 9d dd d3 63 84 3b 3d 9f 8f e2 e5 81 cd 3a b6 ....c.;=......:. 00:22:29.445 00000070 8a a2 b1 5e 2c f4 df 4f bb 86 92 af 0f b7 78 35 ...^,..O......x5 00:22:29.445 00000080 0d 43 dd 80 f2 19 33 a1 f1 6e 8b d1 82 83 ee 31 .C....3..n.....1 00:22:29.445 00000090 84 2f 07 6a 43 85 a7 b7 16 91 41 5d 80 98 8b 06 ./.jC.....A].... 00:22:29.445 000000a0 88 21 d5 99 e9 3d e1 17 52 1e 30 cc 6f 2d c1 3a .!...=..R.0.o-.: 00:22:29.445 000000b0 66 98 17 93 08 12 04 36 be 73 26 26 95 6b de c7 f......6.s&&.k.. 00:22:29.445 000000c0 c0 67 23 5d 87 88 4d 44 01 1c 1d 0e a4 89 66 cd .g#]..MD......f. 00:22:29.445 000000d0 b0 22 df c1 48 44 08 16 c8 f8 a9 f6 ad 1f d7 6d ."..HD.........m 00:22:29.445 000000e0 66 33 4f 3d 35 fa 59 45 ca f6 47 54 d9 31 90 25 f3O=5.YE..GT.1.% 00:22:29.445 000000f0 eb 00 92 5e 01 f3 a0 51 f0 bf a2 14 56 6f 1e 8b ...^...Q....Vo.. 00:22:29.445 00000100 9d 18 f9 80 f5 98 24 4a 0c 59 70 67 03 7e be 5e ......$J.Ypg.~.^ 00:22:29.445 00000110 7e aa b0 25 ea c1 20 0a 45 d5 f5 63 78 49 a3 6f ~..%.. .E..cxI.o 00:22:29.445 00000120 2b c1 72 b4 e0 aa 0f 0a d3 8d a7 32 34 94 56 34 +.r........24.V4 00:22:29.445 00000130 6c 98 50 4c d9 24 31 9a 8f 90 5e 12 d3 d5 06 06 l.PL.$1...^..... 00:22:29.445 00000140 87 39 9f 4d f5 da bf c5 a0 33 a2 a7 8e 46 b4 35 .9.M.....3...F.5 00:22:29.445 00000150 47 c7 2d f2 85 47 87 e9 54 d8 59 87 c9 b1 72 a0 G.-..G..T.Y...r. 00:22:29.445 00000160 79 dc 6d 96 bb 82 d0 f1 e0 b6 a7 ef 00 52 96 f5 y.m..........R.. 00:22:29.445 00000170 29 45 4a 31 23 58 0d c4 7b ee 5a 95 b1 7c 44 60 )EJ1#X..{.Z..|D` 00:22:29.445 dh secret: 00:22:29.445 00000000 95 a8 95 fa cf ae b2 bc 0a 4b c2 85 2f 0a a0 97 .........K../... 00:22:29.445 00000010 e0 ac d4 d9 b7 95 93 6a bb 49 90 22 e2 0c 25 ff .......j.I."..%. 00:22:29.445 00000020 ff 41 b3 43 fc 9b 27 11 0a 4d 1c 8f 1a 72 a4 5d .A.C..'..M...r.] 00:22:29.445 00000030 31 9a 5d 5b a1 33 4c 9a f4 a8 7e 0e 41 56 8e fa 1.][.3L...~.AV.. 00:22:29.445 00000040 6b c3 de 39 de 32 b6 e3 4c 88 40 15 5b 0d f5 73 k..9.2..L.@.[..s 00:22:29.445 00000050 bd d4 a1 3c 0a 57 f9 cf e1 a5 a5 7f 56 d2 da 1f ...<.W......V... 00:22:29.445 00000060 10 ca a3 fc 19 22 1b 70 de a0 3f 95 c6 c6 64 08 .....".p..?...d. 00:22:29.445 00000070 f9 8f 3d 46 2f 84 f8 bf 89 e2 2e 0e e9 7f b6 ab ..=F/........... 00:22:29.445 00000080 39 5b 03 b3 27 2c 2a 2b dd ee 97 82 a0 9f a1 7b 9[..',*+.......{ 00:22:29.445 00000090 ea cd 26 42 32 6f 62 15 6d f8 ac 9a 7f 32 1e 81 ..&B2ob.m....2.. 
00:22:29.445 000000a0 a2 3f 29 a1 ff 04 b1 5a 7e 20 bb c6 1c d7 57 0e .?)....Z~ ....W. 00:22:29.445 000000b0 33 e9 4b 8a fb 95 2b 0b 34 4e 4a 9c 7f 30 15 b6 3.K...+.4NJ..0.. 00:22:29.446 000000c0 96 54 23 bc ca 53 06 b3 75 79 1e 06 02 c5 a6 a9 .T#..S..uy...... 00:22:29.446 000000d0 f3 68 25 68 fb d1 ab 41 73 9b 1a 5e 94 b3 cd 39 .h%h...As..^...9 00:22:29.446 000000e0 8e 62 cb 7a bf de 03 a6 28 9c 85 04 f2 c8 f8 36 .b.z....(......6 00:22:29.446 000000f0 77 7e 90 80 d7 29 13 41 a5 d4 fe ab d5 ed bd 38 w~...).A.......8 00:22:29.446 00000100 34 53 74 4b 8a 3a c0 4c ca 33 2f 6d 6c d1 94 9c 4StK.:.L.3/ml... 00:22:29.446 00000110 c7 35 02 1c 85 61 91 66 1d 8a 30 3d 35 28 a0 a5 .5...a.f..0=5(.. 00:22:29.446 00000120 92 90 c0 a6 2b 91 cf ff 30 35 49 1c 7a 57 f0 27 ....+...05I.zW.' 00:22:29.446 00000130 c5 87 b5 c5 26 18 77 3d cb 5e ec d5 29 81 17 8f ....&.w=.^..)... 00:22:29.446 00000140 0a 5b 37 d9 5e a5 df 48 d1 42 0e aa b4 3d 45 9c .[7.^..H.B...=E. 00:22:29.446 00000150 74 55 a5 71 96 a3 14 ee 3c d2 95 bf 4f bb de 2a tU.q....<...O..* 00:22:29.446 00000160 c9 5c 5e 8e 52 87 64 b3 dc aa e4 a2 68 62 81 fd .\^.R.d.....hb.. 00:22:29.446 00000170 ab 7f ab 0a ef 8e 38 69 da 25 d8 16 eb 3d 54 75 ......8i.%...=Tu 00:22:29.446 [2024-09-27 13:27:02.553179] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key4, hash=1, dhgroup=2, seq=3775755192, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.446 [2024-09-27 13:27:02.553624] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.446 [2024-09-27 13:27:02.561176] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.446 [2024-09-27 13:27:02.561423] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.446 [2024-09-27 13:27:02.561743] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.446 [2024-09-27 13:27:03.330778] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.446 [2024-09-27 13:27:03.331180] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.446 [2024-09-27 13:27:03.331361] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 3 (ffdhe4096) 00:22:29.446 [2024-09-27 13:27:03.331549] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.446 [2024-09-27 13:27:03.331881] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.446 ctrlr pubkey: 00:22:29.446 00000000 04 f2 36 69 b9 24 09 e3 30 2a 5d e6 90 49 0b 23 ..6i.$..0*]..I.# 00:22:29.446 00000010 61 c2 6f d4 2b c7 ce 89 05 c3 ca 01 6f 93 af bd a.o.+.......o... 00:22:29.446 00000020 10 98 d3 72 fc 67 09 ed 2b 69 6e 45 e7 d9 41 6f ...r.g..+inE..Ao 00:22:29.446 00000030 19 ad 03 5d 57 7b 74 47 bc 98 fa 6c 2e 7f 6e c3 ...]W{tG...l..n. 00:22:29.446 00000040 a8 ab ae 3a 3e d1 e8 5f 16 1c 57 b5 7e 8a 93 1c ...:>.._..W.~... 
00:22:29.446 00000050 2f e7 74 85 d4 33 d9 37 2a 46 65 df c8 8c e7 e6 /.t..3.7*Fe..... 00:22:29.446 00000060 4a 96 18 5f 67 62 57 f9 a7 74 d2 3c 51 b9 13 67 J.._gbW..t.WLS..^.K....8.. 00:22:29.446 000000d0 fa 42 cb ab bb 71 74 0c 08 19 db a6 12 45 b5 31 .B...qt......E.1 00:22:29.446 000000e0 bd ea 19 87 9f a1 da c8 00 9f f2 7e 67 73 62 cd ...........~gsb. 00:22:29.446 000000f0 16 81 8d 0c 82 ed 33 5a 1e a8 13 7c b5 f2 5e 8f ......3Z...|..^. 00:22:29.446 00000100 cc 59 78 4c c0 58 df 69 c3 59 96 62 c2 e1 a7 60 .YxL.X.i.Y.b...` 00:22:29.446 00000110 e3 8b 76 43 f0 da 7d 95 38 80 da c0 0b c9 bd 57 ..vC..}.8......W 00:22:29.446 00000120 c5 a8 9e 59 bd 64 34 a3 39 4d c8 36 76 fc 68 63 ...Y.d4.9M.6v.hc 00:22:29.446 00000130 5b f2 a0 cf fb f9 de ae 85 0e e3 57 7a ba 43 53 [..........Wz.CS 00:22:29.446 00000140 0a 7f 0f b7 8c 23 2b c6 c8 d1 16 bf c8 7f 08 b9 .....#+......... 00:22:29.446 00000150 5b 23 df 63 5b 8f 5a 1d 15 86 a2 0f b6 c7 2b 27 [#.c[.Z.......+' 00:22:29.446 00000160 c3 d6 5f 1b 29 5d 21 19 9c e5 74 11 ae 2b 3c f6 .._.)]!...t..+<. 00:22:29.446 00000170 81 7d 3c 28 33 db c3 16 5d dc 48 92 17 8c 59 7b .}<(3...].H...Y{ 00:22:29.446 00000180 05 c6 84 ee ae 4a 9b 35 9d 25 94 5e ad 36 61 4d .....J.5.%.^.6aM 00:22:29.446 00000190 55 78 1b d2 ad b9 a6 14 5e 95 b0 a3 83 3c fd 0c Ux......^....<.. 00:22:29.446 000001a0 9c ff 32 8e ef 1e fb a7 a2 db cf c3 2d 10 fe da ..2.........-... 00:22:29.446 000001b0 79 5b a0 7d 2d 9b ec a8 5c b8 e7 ce 2b 25 2e dc y[.}-...\...+%.. 00:22:29.446 000001c0 45 4a 79 d9 2a 4b 93 20 40 07 ce b6 41 c1 81 72 EJy.*K. @...A..r 00:22:29.446 000001d0 6e d6 9b 20 5e 17 3a bd 8a 5e f8 6e 2a 58 32 a0 n.. ^.:..^.n*X2. 00:22:29.446 000001e0 b9 9a d4 de b9 8a 73 fd 2d ee d0 e3 31 df 8b 86 ......s.-...1... 00:22:29.446 000001f0 3d 14 35 bd 2b e5 81 47 4c b8 ba e1 15 90 0a 34 =.5.+..GL......4 00:22:29.446 host pubkey: 00:22:29.446 00000000 e2 c2 89 8e 2f d2 cb ab b1 01 e2 78 64 d3 d5 d7 ..../......xd... 00:22:29.446 00000010 41 2b 02 cf ea 65 c1 dc b9 7c 97 ed 2f 50 80 a7 A+...e...|../P.. 00:22:29.446 00000020 4b 6c 25 3d fa 89 75 20 c9 a8 7d 3c 5b 7b 45 19 Kl%=..u ..}<[{E. 00:22:29.446 00000030 06 bc 61 b3 55 6b a1 5a ed 75 71 06 8a 35 84 1a ..a.Uk.Z.uq..5.. 00:22:29.446 00000040 c2 58 1c 22 91 05 50 26 d7 36 ac 96 5d 08 64 8c .X."..P&.6..].d. 00:22:29.446 00000050 f6 1e 3b 5a 3d 3b 1c 00 26 e6 81 48 6c 23 83 aa ..;Z=;..&..Hl#.. 00:22:29.446 00000060 87 0e 8f 0f b6 a4 1f 5f f7 3b f1 52 a2 77 91 59 ......._.;.R.w.Y 00:22:29.446 00000070 24 7e 93 6c a9 20 8f 67 4c c9 b2 58 d6 4b 17 1a $~.l. .gL..X.K.. 00:22:29.446 00000080 a1 97 da 22 b3 3a 98 1c be 66 0f 79 a4 7b 42 e1 ...".:...f.y.{B. 00:22:29.446 00000090 b5 ac 7a 9c bf 02 29 5b 0d 02 f5 df c4 f7 36 51 ..z...)[......6Q 00:22:29.446 000000a0 f8 4d 80 9e fe d6 db 94 16 27 5d eb 11 cd 08 db .M.......']..... 00:22:29.446 000000b0 82 a8 10 17 c3 6b ad 7c 47 a7 5f 10 ff 47 fe e3 .....k.|G._..G.. 00:22:29.446 000000c0 73 15 58 37 e9 3a ac 4d 44 a8 ed fa 9c 11 29 c4 s.X7.:.MD.....). 00:22:29.446 000000d0 ab 85 9d b8 be 25 49 75 5c 64 97 6e aa b9 9e d0 .....%Iu\d.n.... 00:22:29.446 000000e0 2c 07 55 af e7 f4 51 89 40 d4 13 af 56 a5 85 6d ,.U...Q.@...V..m 00:22:29.446 000000f0 c8 98 8d b6 a0 ee 02 31 8a cd d1 39 74 b9 74 e6 .......1...9t.t. 
00:22:29.446 00000100 57 0d b9 40 58 ed 72 f5 85 11 9d a5 6e af a5 77 W..@X.r.....n..w 00:22:29.446 00000110 ea 06 22 e7 02 b1 75 e6 17 68 46 ac f7 b4 f8 40 .."...u..hF....@ 00:22:29.446 00000120 2f e2 2e a7 c2 53 39 30 11 99 c7 9c 19 d2 e3 25 /....S90.......% 00:22:29.446 00000130 a7 02 d0 96 89 66 2c 8f 70 ff da bc bf c0 74 9e .....f,.p.....t. 00:22:29.446 00000140 58 94 60 f6 1b 51 d8 78 ed 0e 07 6b 5f 11 73 97 X.`..Q.x...k_.s. 00:22:29.446 00000150 2b 6e 43 b9 70 13 e8 21 45 1f a6 43 46 3e a2 b5 +nC.p..!E..CF>.. 00:22:29.446 00000160 97 d4 a8 e9 40 68 68 76 1a 35 0a 78 b2 2f 77 c2 ....@hhv.5.x./w. 00:22:29.446 00000170 54 07 6b 25 66 90 38 e7 78 d8 e0 ee da 5a a6 ea T.k%f.8.x....Z.. 00:22:29.446 00000180 f6 de b0 a1 e5 47 a1 af 42 06 49 b4 9e ba cd 15 .....G..B.I..... 00:22:29.446 00000190 9f f4 3d fe 68 5f 72 e8 85 e4 5e e6 e5 0c 49 9d ..=.h_r...^...I. 00:22:29.446 000001a0 48 d5 99 bb 87 c9 f4 04 09 ea 96 ad d3 2b c3 64 H............+.d 00:22:29.446 000001b0 01 2e d5 51 11 82 24 a2 86 3c 84 f9 ca ba 5f 9d ...Q..$..<...._. 00:22:29.446 000001c0 2c 02 f3 65 23 10 47 13 ca 08 d9 12 8f 46 a8 6a ,..e#.G......F.j 00:22:29.446 000001d0 1a 53 21 6b 23 07 32 3a 78 a2 95 7b b1 36 ad b3 .S!k#.2:x..{.6.. 00:22:29.446 000001e0 1a 11 75 79 00 20 30 bc 5c 68 39 fe 55 8b 32 4c ..uy. 0.\h9.U.2L 00:22:29.446 000001f0 a2 d6 84 f9 aa 4e 4c bf 19 03 90 ad b7 29 fa c2 .....NL......).. 00:22:29.446 dh secret: 00:22:29.446 00000000 e7 84 ad 4b 9b a9 a1 11 5b 97 0c 74 b1 69 1e 83 ...K....[..t.i.. 00:22:29.446 00000010 f4 98 9c 92 cd c6 12 65 c1 9f 18 0e 42 37 92 5e .......e....B7.^ 00:22:29.446 00000020 70 77 78 db c2 5b a8 ec 3d d3 e9 b7 f5 30 49 6d pwx..[..=....0Im 00:22:29.446 00000030 0f 1c 27 6b 55 35 2f 15 c9 0f e1 3c d2 7e 3d df ..'kU5/....<.~=. 00:22:29.446 00000040 98 f0 2d e0 89 3e 7e c4 a7 f5 9c 23 9d de 4e 03 ..-..>~....#..N. 00:22:29.446 00000050 48 52 a7 77 1f f7 f2 07 fd 56 ea 9c d4 79 c6 a3 HR.w.....V...y.. 00:22:29.446 00000060 6c ec 24 2b 6e 0f a4 a3 bf f8 a5 de 5b 3e 5b dc l.$+n.......[>[. 00:22:29.446 00000070 b2 cf 0b 73 19 16 43 77 08 83 2a 90 49 a4 74 c4 ...s..Cw..*.I.t. 00:22:29.446 00000080 c8 74 d7 7d 6f 1d 50 01 6c 11 56 6f a7 88 e1 2b .t.}o.P.l.Vo...+ 00:22:29.446 00000090 88 9d d9 d7 99 ee b3 ad 5c ef 90 84 eb 96 f4 75 ........\......u 00:22:29.446 000000a0 89 16 e1 8c 46 5f 44 14 44 22 13 12 16 01 c2 0a ....F_D.D"...... 00:22:29.446 000000b0 68 52 5d 80 6e 04 89 69 c3 91 4e 1f c2 78 77 1b hR].n..i..N..xw. 00:22:29.446 000000c0 e5 fa 62 9f f8 0d ba 00 97 60 a1 91 f9 a6 2b 52 ..b......`....+R 00:22:29.446 000000d0 5e e7 85 1c 19 73 cf 75 a8 8f ad de c6 8b da 9e ^....s.u........ 00:22:29.446 000000e0 40 08 73 e2 d0 5f 0c 2d b7 44 45 80 28 6d 10 63 @.s.._.-.DE.(m.c 00:22:29.446 000000f0 fa c5 e9 54 4e fd a3 6e 17 e8 5d 93 00 5d 29 58 ...TN..n..]..])X 00:22:29.446 00000100 e7 1b e2 7b 58 98 dd 95 fd 5f 37 d4 f0 fa 53 5a ...{X...._7...SZ 00:22:29.446 00000110 8d a0 ba 6f c9 f1 b3 d3 ba 28 98 ae 46 81 10 4d ...o.....(..F..M 00:22:29.446 00000120 04 6d 86 76 51 d2 d6 00 12 0b a9 bd 24 2b 10 f6 .m.vQ.......$+.. 00:22:29.446 00000130 f0 c7 ae d4 30 8b 3a 81 1f d4 bb 50 c4 b9 eb f3 ....0.:....P.... 00:22:29.446 00000140 1b 6b 4e ea ba 64 40 cd f3 5f ac c4 00 b9 9e de .kN..d@.._...... 00:22:29.446 00000150 0a 93 fa f6 46 5d 38 9e 90 74 3c 41 ef bf ea 30 ....F]8..t.._..W.~... 00:22:29.447 00000050 2f e7 74 85 d4 33 d9 37 2a 46 65 df c8 8c e7 e6 /.t..3.7*Fe..... 00:22:29.447 00000060 4a 96 18 5f 67 62 57 f9 a7 74 d2 3c 51 b9 13 67 J.._gbW..t.WLS..^.K....8.. 
00:22:29.447 000000d0 fa 42 cb ab bb 71 74 0c 08 19 db a6 12 45 b5 31 .B...qt......E.1 00:22:29.447 000000e0 bd ea 19 87 9f a1 da c8 00 9f f2 7e 67 73 62 cd ...........~gsb. 00:22:29.447 000000f0 16 81 8d 0c 82 ed 33 5a 1e a8 13 7c b5 f2 5e 8f ......3Z...|..^. 00:22:29.447 00000100 cc 59 78 4c c0 58 df 69 c3 59 96 62 c2 e1 a7 60 .YxL.X.i.Y.b...` 00:22:29.447 00000110 e3 8b 76 43 f0 da 7d 95 38 80 da c0 0b c9 bd 57 ..vC..}.8......W 00:22:29.447 00000120 c5 a8 9e 59 bd 64 34 a3 39 4d c8 36 76 fc 68 63 ...Y.d4.9M.6v.hc 00:22:29.447 00000130 5b f2 a0 cf fb f9 de ae 85 0e e3 57 7a ba 43 53 [..........Wz.CS 00:22:29.447 00000140 0a 7f 0f b7 8c 23 2b c6 c8 d1 16 bf c8 7f 08 b9 .....#+......... 00:22:29.447 00000150 5b 23 df 63 5b 8f 5a 1d 15 86 a2 0f b6 c7 2b 27 [#.c[.Z.......+' 00:22:29.447 00000160 c3 d6 5f 1b 29 5d 21 19 9c e5 74 11 ae 2b 3c f6 .._.)]!...t..+<. 00:22:29.447 00000170 81 7d 3c 28 33 db c3 16 5d dc 48 92 17 8c 59 7b .}<(3...].H...Y{ 00:22:29.447 00000180 05 c6 84 ee ae 4a 9b 35 9d 25 94 5e ad 36 61 4d .....J.5.%.^.6aM 00:22:29.447 00000190 55 78 1b d2 ad b9 a6 14 5e 95 b0 a3 83 3c fd 0c Ux......^....<.. 00:22:29.447 000001a0 9c ff 32 8e ef 1e fb a7 a2 db cf c3 2d 10 fe da ..2.........-... 00:22:29.447 000001b0 79 5b a0 7d 2d 9b ec a8 5c b8 e7 ce 2b 25 2e dc y[.}-...\...+%.. 00:22:29.447 000001c0 45 4a 79 d9 2a 4b 93 20 40 07 ce b6 41 c1 81 72 EJy.*K. @...A..r 00:22:29.447 000001d0 6e d6 9b 20 5e 17 3a bd 8a 5e f8 6e 2a 58 32 a0 n.. ^.:..^.n*X2. 00:22:29.447 000001e0 b9 9a d4 de b9 8a 73 fd 2d ee d0 e3 31 df 8b 86 ......s.-...1... 00:22:29.447 000001f0 3d 14 35 bd 2b e5 81 47 4c b8 ba e1 15 90 0a 34 =.5.+..GL......4 00:22:29.447 host pubkey: 00:22:29.447 00000000 15 14 c8 f9 06 a4 d6 ea e0 69 68 a8 27 e2 3c 28 .........ih.'.<( 00:22:29.447 00000010 e3 fc e8 e0 ec ed ba b2 44 c5 7a 2b 39 03 55 18 ........D.z+9.U. 00:22:29.447 00000020 4c e5 1c e1 47 e1 a5 27 50 95 13 a8 b8 f1 9f 5b L...G..'P......[ 00:22:29.447 00000030 75 84 94 9c 3c 82 60 51 3e cf ab 64 f2 66 c5 26 u...<.`Q>..d.f.& 00:22:29.447 00000040 24 d0 17 f5 01 a3 91 1b 10 93 27 ad f5 8c 98 1b $.........'..... 00:22:29.447 00000050 cf b7 ae 26 b0 ab 4b 00 10 df 42 61 6e cb ba 67 ...&..K...Ban..g 00:22:29.447 00000060 10 c0 4b 1f a1 9a 93 e7 fe 04 35 2e 57 10 de 1c ..K.......5.W... 00:22:29.447 00000070 7b 5b 74 f9 97 a4 a8 1f 83 35 b0 1d 34 43 45 02 {[t......5..4CE. 00:22:29.447 00000080 1a 14 4f b6 7c 1b 93 90 27 7e e9 de 32 61 55 b0 ..O.|...'~..2aU. 00:22:29.447 00000090 18 08 fc df 8e ad 93 80 68 9a 52 6c 6d 94 c9 60 ........h.Rlm..` 00:22:29.447 000000a0 66 2b f4 99 89 9e 4b 9f 94 4c 45 d7 6e 7f 82 0b f+....K..LE.n... 00:22:29.447 000000b0 64 2c dc 0a 3f ee 61 80 4b 1a ca 4c b9 9b 5a 61 d,..?.a.K..L..Za 00:22:29.447 000000c0 bc 81 5a 48 e6 04 8b 82 bd a7 5b e2 4e 58 cf ea ..ZH......[.NX.. 00:22:29.447 000000d0 42 0a 60 31 28 b8 cb 45 02 96 e2 c8 02 2c 03 98 B.`1(..E.....,.. 00:22:29.447 000000e0 c4 b3 e2 06 b1 e1 1a d4 ca d8 96 31 0b a0 c7 3a ...........1...: 00:22:29.447 000000f0 16 cd da 07 b9 a4 fb 6f e5 55 22 6f 98 2a e9 b6 .......o.U"o.*.. 00:22:29.447 00000100 d6 1e 3b b3 66 68 cc 3b b8 ba ee 01 88 ef 7b 3b ..;.fh.;......{; 00:22:29.447 00000110 db a0 35 a5 c8 ea 93 6c dc f0 22 54 e8 5e ae 87 ..5....l.."T.^.. 00:22:29.447 00000120 3a 34 dd 37 8a 16 b2 21 ce cb 7d d6 8c e6 66 af :4.7...!..}...f. 00:22:29.447 00000130 10 78 aa 7c 28 76 7a 8c ea 8c 97 d7 5f 22 cb a7 .x.|(vz....._".. 
00:22:29.447 00000140 5d 13 f5 5a 33 ff 26 66 c3 6f 24 5e 5d 64 d1 39 ]..Z3.&f.o$^]d.9 00:22:29.447 00000150 a2 bf f8 2c 3f 02 8c 8a f1 ae ad 9d 72 f4 a6 7e ...,?.......r..~ 00:22:29.447 00000160 63 7f 7e cd 41 63 c4 78 90 ca 8b 97 9f 01 08 41 c.~.Ac.x.......A 00:22:29.447 00000170 36 7f 5f 58 eb 64 a3 f2 cb 6c 3a 9b 7a 05 e5 6b 6._X.d...l:.z..k 00:22:29.447 00000180 ad 02 f4 cb 62 e8 45 d1 ac 82 ea fb 30 c9 8a cd ....b.E.....0... 00:22:29.447 00000190 3e 7c 7a 71 cb 46 45 92 bb 34 4e d8 41 30 b0 2b >|zq.FE..4N.A0.+ 00:22:29.447 000001a0 7a 06 b4 01 1a dd 9a d0 17 66 e7 d0 d2 9e 74 8a z........f....t. 00:22:29.447 000001b0 f1 6c 71 c1 41 a0 f5 38 4d ba 39 01 39 b7 dd 5a .lq.A..8M.9.9..Z 00:22:29.447 000001c0 e9 b7 4f e2 a4 33 d6 7f a0 29 7a 57 96 90 13 3f ..O..3...)zW...? 00:22:29.447 000001d0 35 7d 86 90 77 a9 5b 34 04 d9 a7 2d 2d 72 c7 7f 5}..w.[4...--r.. 00:22:29.447 000001e0 3a 27 7a 76 7e fe ff be db 62 fa cc 7c de 17 14 :'zv~....b..|... 00:22:29.447 000001f0 ca 71 e5 ba 1f 9a 3e d7 c9 35 2d 29 a0 26 37 24 .q....>..5-).&7$ 00:22:29.447 dh secret: 00:22:29.447 00000000 a3 4f a6 8b cd 69 d3 dd 9a 9a 41 11 b5 09 41 33 .O...i....A...A3 00:22:29.447 00000010 64 2c f3 70 1b 24 dd c6 f3 1d 1f d5 1c e3 d7 a1 d,.p.$.......... 00:22:29.447 00000020 3b 37 4c 8c 8a 70 ab ba 4f c0 5e e5 13 bb ee 59 ;7L..p..O.^....Y 00:22:29.447 00000030 8c 76 ae 78 9b 00 2e 86 04 79 d1 a3 52 74 17 46 .v.x.....y..Rt.F 00:22:29.447 00000040 fd 0d 43 d7 d5 10 68 3e fb 39 cd df 66 01 b2 f8 ..C...h>.9..f... 00:22:29.447 00000050 3c bf 63 24 52 0b 39 93 30 f0 91 c5 1d 38 fc 70 <.c$R.9.0....8.p 00:22:29.447 00000060 3c fd 94 4c d1 7f b5 b8 f5 51 24 2d 7c 7a a1 ac <..L.....Q$-|z.. 00:22:29.447 00000070 7d 84 2a 81 e2 51 7c c4 1d 0b 64 c6 20 e0 70 f0 }.*..Q|...d. .p. 00:22:29.447 00000080 1a 7f 7a ef 6c 85 28 80 a1 cc 27 e0 60 f2 b7 ee ..z.l.(...'.`... 00:22:29.447 00000090 71 6b ce d6 ce b5 b2 f8 f8 df 9b 95 b4 9f 28 cc qk............(. 00:22:29.447 000000a0 5a 59 be 4f 32 b1 5c 2d e7 a4 10 21 4b 2d ad 1e ZY.O2.\-...!K-.. 00:22:29.447 000000b0 fe de 8a 79 3c 58 0d 46 68 09 97 c6 fd 90 3c 9a ...y....z.....#|... 00:22:29.447 00000180 52 a2 2c a7 f7 1e bf b4 07 d1 02 db ca 55 75 8a R.,..........Uu. 00:22:29.447 00000190 e2 85 11 a1 cc 86 b7 de 25 ff b6 24 00 03 4f 2f ........%..$..O/ 00:22:29.447 000001a0 63 ce a5 d5 dd f2 12 15 5a 5e 0a b1 79 b3 13 59 c.......Z^..y..Y 00:22:29.447 000001b0 e1 de 9b c8 f7 4b c4 f2 d3 33 2d f6 0c 88 e4 e7 .....K...3-..... 00:22:29.447 000001c0 48 a8 7c 85 7c 63 97 16 e6 d3 83 2e e5 dc b4 31 H.|.|c.........1 00:22:29.447 000001d0 ee a8 27 18 92 3b 7d 07 f5 b7 23 93 21 1d e6 94 ..'..;}...#.!... 00:22:29.447 000001e0 c4 6a 08 59 c8 a3 00 59 da bd 4e 48 c9 11 0e a9 .j.Y...Y..NH.... 00:22:29.447 000001f0 52 00 bb 72 65 8c 86 e9 a6 26 e3 62 fd dc 8e a3 R..re....&.b.... 
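Each "dh secret" dump is exactly as long as the prime of the negotiated FFDHE group: 0x180 (384) bytes for the ffdhe3072 (dhgroup 2) exchanges earlier in the trace, and 0x200 (512) bytes for the ffdhe4096 (dhgroup 3) exchange that just completed. A small C sketch of that mapping, keyed on the dhgroup index printed in the log, is below; dh_secret_len() is a hypothetical helper, with the 2=ffdhe3072 / 3=ffdhe4096 entries taken from this trace and the remaining indices from the standard DH-HMAC-CHAP group numbering.

/*
 * Sketch: map the dhgroup index printed in the trace to the expected
 * DH shared-secret length in bytes, i.e. the size of the RFC 7919 prime.
 * dh_secret_len() is a hypothetical helper, not an SPDK symbol.
 */
#include <stddef.h>
#include <stdio.h>

static size_t
dh_secret_len(int dhgroup)
{
	switch (dhgroup) {
	case 1: return 2048 / 8;  /* ffdhe2048 -> 256 B             */
	case 2: return 3072 / 8;  /* ffdhe3072 -> 384 B (0x180)     */
	case 3: return 4096 / 8;  /* ffdhe4096 -> 512 B (0x200)     */
	case 4: return 6144 / 8;  /* ffdhe6144 -> 768 B             */
	case 5: return 8192 / 8;  /* ffdhe8192 -> 1024 B            */
	default: return 0;        /* 0 = NULL group, no DH exchange */
	}
}

int
main(void)
{
	printf("dhgroup 2 secret: %zu bytes\n", dh_secret_len(2));
	printf("dhgroup 3 secret: %zu bytes\n", dh_secret_len(3));
	return 0;
}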
00:22:29.447 [2024-09-27 13:27:03.470447] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key0, hash=1, dhgroup=3, seq=3775755194, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.447 [2024-09-27 13:27:03.470875] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.447 [2024-09-27 13:27:03.495335] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.447 [2024-09-27 13:27:03.495786] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.447 [2024-09-27 13:27:03.495981] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.447 [2024-09-27 13:27:03.496204] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.447 [2024-09-27 13:27:03.617021] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.448 [2024-09-27 13:27:03.617278] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.448 [2024-09-27 13:27:03.617472] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 3 (ffdhe4096) 00:22:29.448 [2024-09-27 13:27:03.617616] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.448 [2024-09-27 13:27:03.618089] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.448 ctrlr pubkey: 00:22:29.448 00000000 5a 75 5d f2 32 8c cb b9 cf 17 f5 93 11 6d 1d 73 Zu].2........m.s 00:22:29.448 00000010 fc b2 8d c9 dd 01 39 de 0e 0e ea c1 e1 18 c7 75 ......9........u 00:22:29.448 00000020 73 b8 84 51 7a 64 59 22 42 79 23 87 e5 be c0 fc s..QzdY"By#..... 00:22:29.448 00000030 66 be 05 da c7 18 c2 15 92 a5 35 27 7a b8 71 e9 f.........5'z.q. 00:22:29.448 00000040 c6 2c e5 fa 58 e2 8a b0 eb bf 18 bb d5 17 33 67 .,..X.........3g 00:22:29.448 00000050 44 1e 27 bf fe 23 af 6f ad 64 f7 a2 54 b2 70 5a D.'..#.o.d..T.pZ 00:22:29.448 00000060 60 c3 b5 39 22 e8 70 92 5f c6 b3 63 1b 20 ee 6c `..9".p._..c. .l 00:22:29.448 00000070 25 07 58 84 64 c4 6a 22 28 2b 5b 54 62 a3 a8 f7 %.X.d.j"(+[Tb... 00:22:29.448 00000080 4b 06 13 b3 fc 91 45 11 72 23 b3 de b2 bb 2e 0b K.....E.r#...... 00:22:29.448 00000090 64 9f 49 e8 0c 71 e8 74 ce 85 3c 50 80 c0 6d 18 d.I..q.t......\a//`.?... 00:22:29.448 00000140 b5 e5 f8 8a dc 2a 74 2a 77 e7 f0 76 cc f5 9f 2c .....*t*w..v..., 00:22:29.448 00000150 fc d7 41 46 3e eb 93 c4 13 e0 9c 9a 36 81 78 83 ..AF>.......6.x. 00:22:29.448 00000160 cb 8b f5 64 3d 0c 6d f6 dd 23 07 83 4c 89 c3 5e ...d=.m..#..L..^ 00:22:29.448 00000170 71 4d c1 cd 5c d5 ed 90 ed 8b b2 f2 56 97 ec 11 qM..\.......V... 00:22:29.448 00000180 10 72 2e 78 94 c5 8e f7 4c af 09 61 d3 d2 54 4e .r.x....L..a..TN 00:22:29.448 00000190 5a 8b df 54 38 ac f1 ab 97 f5 e4 a1 b9 11 36 f0 Z..T8.........6. 00:22:29.448 000001a0 6b 01 c4 62 43 77 a3 b2 25 55 e8 f1 91 eb 19 02 k..bCw..%U...... 
00:22:29.448 000001b0 81 d9 05 b3 5d 02 d6 70 a6 f9 53 36 69 43 6f 8b ....]..p..S6iCo. 00:22:29.448 000001c0 c0 5a 35 af 78 df 4c cd 07 3b 3b 54 c6 0d 86 60 .Z5.x.L..;;T...` 00:22:29.448 000001d0 72 11 11 6d 14 56 21 30 75 2e 25 b2 6a a9 0c 9e r..m.V!0u.%.j... 00:22:29.448 000001e0 7b 06 cc a2 df 77 b6 ed a2 50 46 ad 51 a6 e5 27 {....w...PF.Q..' 00:22:29.448 000001f0 ee 2e 6f fa 59 30 72 7c c5 76 30 1a d1 66 63 72 ..o.Y0r|.v0..fcr 00:22:29.448 host pubkey: 00:22:29.448 00000000 4a 93 f4 39 bc 2b b4 62 0e d7 1e 12 66 34 13 e6 J..9.+.b....f4.. 00:22:29.448 00000010 d2 ae 2a d0 a8 70 cc 4b 92 7a 51 13 ee 94 0f 3f ..*..p.K.zQ....? 00:22:29.448 00000020 d9 ad 1a 94 92 4d 23 e3 17 d8 84 5c 34 7b 05 1b .....M#....\4{.. 00:22:29.448 00000030 3a 78 b6 ee a6 e2 ad 8b 5f 72 bd 6d c2 a3 c4 70 :x......_r.m...p 00:22:29.448 00000040 d2 cc d7 4f d6 7a 7d 37 26 e2 d3 fc 71 14 53 ef ...O.z}7&...q.S. 00:22:29.448 00000050 e6 ee 25 c8 8c b3 10 85 66 6f e9 5a af dd ce 50 ..%.....fo.Z...P 00:22:29.448 00000060 f1 a9 98 72 3c 77 99 e7 8d af 14 ea ba cc 86 30 ...r:..8/. 00:22:29.448 000000b0 55 a5 87 ad 22 ce 32 87 a5 c5 2a 1d 7e 23 50 09 U...".2...*.~#P. 00:22:29.448 000000c0 5b 5d e0 46 66 e7 1b 7f 5b 8a 05 24 1b 33 31 53 [].Ff...[..$.31S 00:22:29.448 000000d0 35 bd 3f ca 80 f6 b4 06 b9 23 a2 af eb 51 b1 1b 5.?......#...Q.. 00:22:29.448 000000e0 71 c7 98 d8 bb ad 6e 0f 2c e2 e7 5b 77 dc e7 a7 q.....n.,..[w... 00:22:29.448 000000f0 af 9a bb 22 db 00 3a d8 0d d0 ef 23 52 f0 1b 0d ..."..:....#R... 00:22:29.448 00000100 57 d4 87 33 2a 33 e8 e3 30 fa 8c d5 63 2d 21 4b W..3*3..0...c-!K 00:22:29.448 00000110 c9 8f 19 7d f1 a8 64 f1 d3 c8 90 3a 8d 93 e8 64 ...}..d....:...d 00:22:29.448 00000120 01 81 cc 76 ed b3 f0 d2 35 9f 36 c1 c1 00 59 f5 ...v....5.6...Y. 00:22:29.448 00000130 a3 9f 56 37 07 b1 04 af b7 ec c8 23 b9 d3 7c ca ..V7.......#..|. 00:22:29.448 00000140 1f 75 04 b8 89 56 f0 fe e7 3d 7e d2 c2 02 89 7f .u...V...=~..... 00:22:29.448 00000150 49 6c 10 ca 41 87 33 49 c3 91 a9 b6 f3 24 48 99 Il..A.3I.....$H. 00:22:29.448 00000160 75 aa 81 25 81 69 fb 27 49 f4 44 bd 82 de aa 5e u..%.i.'I.D....^ 00:22:29.448 00000170 41 41 25 1f 7a 2c 7d 4d 89 be c0 2f 1f 55 48 8a AA%.z,}M.../.UH. 00:22:29.448 00000180 eb 05 06 2d d5 6c d6 26 d8 f6 20 05 32 67 1a 45 ...-.l.&.. .2g.E 00:22:29.448 00000190 ad 69 25 72 df 2a 51 74 16 b9 bc 95 06 4f 6b 5e .i%r.*Qt.....Ok^ 00:22:29.448 000001a0 bf 85 ef b4 c5 cf 68 d5 ef 9b a3 01 e8 bc 11 40 ......h........@ 00:22:29.448 000001b0 12 8b f8 79 71 ce 4b ea 2a 73 eb c3 9e 8c c8 af ...yq.K.*s...... 00:22:29.448 000001c0 63 e8 46 e0 73 fb 15 5e 2e 52 fb 04 23 75 66 d0 c.F.s..^.R..#uf. 00:22:29.448 000001d0 ab c0 bf 47 a9 89 fc 69 f2 a3 fa ae ad 8e a5 a6 ...G...i........ 00:22:29.448 000001e0 ac d1 32 de 27 11 eb 41 75 5f 95 b5 05 07 f8 e3 ..2.'..Au_...... 00:22:29.448 000001f0 69 e5 6c 78 bb 64 e2 76 74 40 4a c2 2c 7b dc 93 i.lx.d.vt@J.,{.. 00:22:29.448 dh secret: 00:22:29.448 00000000 64 b7 16 7e a5 40 50 5b 35 f8 a0 8e 61 0a ff 48 d..~.@P[5...a..H 00:22:29.448 00000010 0e 79 e2 3e 77 4d 94 03 f4 be 55 ca 54 6c bd 77 .y.>wM....U.Tl.w 00:22:29.448 00000020 49 58 06 0a e0 07 17 3c 8b e2 1d 33 bd 00 81 dd IX.....<...3.... 00:22:29.448 00000030 87 07 71 c8 ab a2 ea a6 06 0d 17 d1 f5 cb 89 0e ..q............. 00:22:29.448 00000040 d2 02 a5 e1 c8 21 6c a9 50 3d 71 db 89 82 7f 54 .....!l.P=q....T 00:22:29.448 00000050 d8 74 37 fa 8d e2 01 6c a7 58 b0 4e 8d 76 9e f3 .t7....l.X.N.v.. 00:22:29.448 00000060 32 27 d6 ae de 82 87 67 d5 b5 fc f1 45 bd 58 3f 2'.....g....E.X? 
00:22:29.448 00000070 15 6d 73 29 10 19 d9 f5 c2 b5 21 f7 c1 f0 37 41 .ms)......!...7A 00:22:29.448 00000080 4d 3c f4 39 62 28 a2 b5 e9 6a 3c 1a 11 a9 e5 d9 M<.9b(...j<..... 00:22:29.448 00000090 db c2 ca fc 84 bb 51 6e 01 58 2c f8 73 cb 21 72 ......Qn.X,.s.!r 00:22:29.448 000000a0 49 40 84 dc b3 1b 6a 70 16 36 0e 91 8a d4 43 1e I@....jp.6....C. 00:22:29.448 000000b0 17 7e 75 71 6f 74 f2 29 6f 55 e6 14 48 46 5c d5 .~uqot.)oU..HF\. 00:22:29.448 000000c0 7e 70 c2 21 8f c6 da 18 32 26 4d d5 e0 9f a2 e8 ~p.!....2&M..... 00:22:29.448 000000d0 d8 b0 01 bf a7 7a 9f 2b c8 58 c7 55 45 e6 c5 d1 .....z.+.X.UE... 00:22:29.448 000000e0 bc 69 0e 2e d0 a8 e2 28 20 68 37 86 de 43 fb ef .i.....( h7..C.. 00:22:29.448 000000f0 ac 4b 91 8a 83 bc ef 14 1f 44 51 44 b6 83 dd c2 .K.......DQD.... 00:22:29.448 00000100 5e b9 7a 80 98 db fe 23 87 e6 56 a3 cc 55 c2 0e ^.z....#..V..U.. 00:22:29.448 00000110 48 e5 88 28 ee 53 a3 41 c0 b2 d3 d3 c4 e1 47 c4 H..(.S.A......G. 00:22:29.448 00000120 4c ca 0a 5a 6a b6 4e e9 fd a5 fb a9 db ad cc 14 L..Zj.N......... 00:22:29.448 00000130 74 ad dd a6 68 67 d0 cd 63 d9 62 b6 6b b2 73 fa t...hg..c.b.k.s. 00:22:29.448 00000140 53 31 7f 21 8c e4 6a fb 9c d7 ed a0 f6 a6 8c 95 S1.!..j......... 00:22:29.448 00000150 bb bf b7 d0 80 4e 71 ab de 0c 99 91 83 3b 70 b3 .....Nq......;p. 00:22:29.448 00000160 1d 02 6b d8 8e 27 91 55 c9 11 0a 42 00 24 6c b7 ..k..'.U...B.$l. 00:22:29.448 00000170 52 c2 56 12 fc 04 f7 e6 1a f1 d0 f1 ac cb 6a 4c R.V...........jL 00:22:29.448 00000180 86 16 09 3e 2e f4 0d 45 b7 42 3a b1 9f 17 25 26 ...>...E.B:...%& 00:22:29.448 00000190 56 09 89 4e 82 c0 f9 af 1e 9d 77 7b 30 c6 a7 50 V..N......w{0..P 00:22:29.448 000001a0 9d 5b 96 30 a7 3a 6a 6d 2f 20 ad 75 8b bc d1 6b .[.0.:jm/ .u...k 00:22:29.448 000001b0 d5 3c 5e ac 9e e8 ea dd 80 d4 68 a1 20 a3 bd 45 .<^.......h. ..E 00:22:29.448 000001c0 0b 8e 66 d0 8a b9 e5 1f e1 a0 0a 43 03 a2 e7 81 ..f........C.... 00:22:29.448 000001d0 4d 02 aa b3 dd 97 dc 77 af 43 a1 2d bd 23 92 32 M......w.C.-.#.2 00:22:29.448 000001e0 38 83 df f3 03 55 64 ae 70 a3 7a ca 0d 58 74 d9 8....Ud.p.z..Xt. 00:22:29.448 000001f0 70 9e 89 95 1e 77 f3 e6 3d bf c7 d5 fc 42 93 eb p....w..=....B.. 
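All of the ctrlr pubkey, host pubkey, and dh secret dumps in this log share one layout: an 8-digit hex offset, sixteen hex byte pairs, and a 16-character ASCII column with '.' substituted for non-printable bytes. The short C sketch below reproduces that layout for reference; it is a stand-alone re-implementation for illustration, not the logging helper SPDK itself uses.

/*
 * Sketch: reproduce the dump layout used in this log (8-digit hex offset,
 * 16 hex byte pairs, printable-ASCII column, '.' for non-printable bytes).
 * Illustrative only; dump_hex() is not the SPDK helper.
 */
#include <ctype.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static void
dump_hex(const uint8_t *buf, size_t len)
{
	for (size_t off = 0; off < len; off += 16) {
		size_t n = len - off < 16 ? len - off : 16;

		printf("%08zx ", off);
		/* Hex column, padded so the ASCII column stays aligned. */
		for (size_t i = 0; i < 16; i++) {
			if (i < n) {
				printf("%02x ", buf[off + i]);
			} else {
				printf("   ");
			}
		}
		/* ASCII column: '.' for anything non-printable. */
		for (size_t i = 0; i < n; i++) {
			putchar(isprint(buf[off + i]) ? buf[off + i] : '.');
		}
		putchar('\n');
	}
}

int
main(void)
{
	uint8_t sample[40];

	for (size_t i = 0; i < sizeof(sample); i++) {
		sample[i] = (uint8_t)(i * 7 + 13);
	}
	dump_hex(sample, sizeof(sample));
	return 0;
}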
00:22:29.448 [2024-09-27 13:27:03.647553] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key1, hash=1, dhgroup=3, seq=3775755195, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.448 [2024-09-27 13:27:03.648044] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.448 [2024-09-27 13:27:03.673731] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.448 [2024-09-27 13:27:03.674065] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.448 [2024-09-27 13:27:03.674409] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.448 [2024-09-27 13:27:03.674708] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.448 [2024-09-27 13:27:03.726526] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.448 [2024-09-27 13:27:03.726840] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 1 (sha256) 00:22:29.448 [2024-09-27 13:27:03.727080] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 3 (ffdhe4096) 00:22:29.448 [2024-09-27 13:27:03.727239] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.448 [2024-09-27 13:27:03.727477] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.448 ctrlr pubkey: 00:22:29.448 00000000 5a 75 5d f2 32 8c cb b9 cf 17 f5 93 11 6d 1d 73 Zu].2........m.s 00:22:29.448 00000010 fc b2 8d c9 dd 01 39 de 0e 0e ea c1 e1 18 c7 75 ......9........u 00:22:29.448 00000020 73 b8 84 51 7a 64 59 22 42 79 23 87 e5 be c0 fc s..QzdY"By#..... 00:22:29.448 00000030 66 be 05 da c7 18 c2 15 92 a5 35 27 7a b8 71 e9 f.........5'z.q. 00:22:29.448 00000040 c6 2c e5 fa 58 e2 8a b0 eb bf 18 bb d5 17 33 67 .,..X.........3g 00:22:29.448 00000050 44 1e 27 bf fe 23 af 6f ad 64 f7 a2 54 b2 70 5a D.'..#.o.d..T.pZ 00:22:29.448 00000060 60 c3 b5 39 22 e8 70 92 5f c6 b3 63 1b 20 ee 6c `..9".p._..c. .l 00:22:29.448 00000070 25 07 58 84 64 c4 6a 22 28 2b 5b 54 62 a3 a8 f7 %.X.d.j"(+[Tb... 00:22:29.448 00000080 4b 06 13 b3 fc 91 45 11 72 23 b3 de b2 bb 2e 0b K.....E.r#...... 00:22:29.448 00000090 64 9f 49 e8 0c 71 e8 74 ce 85 3c 50 80 c0 6d 18 d.I..q.t......\a//`.?... 00:22:29.449 00000140 b5 e5 f8 8a dc 2a 74 2a 77 e7 f0 76 cc f5 9f 2c .....*t*w..v..., 00:22:29.449 00000150 fc d7 41 46 3e eb 93 c4 13 e0 9c 9a 36 81 78 83 ..AF>.......6.x. 00:22:29.449 00000160 cb 8b f5 64 3d 0c 6d f6 dd 23 07 83 4c 89 c3 5e ...d=.m..#..L..^ 00:22:29.449 00000170 71 4d c1 cd 5c d5 ed 90 ed 8b b2 f2 56 97 ec 11 qM..\.......V... 00:22:29.449 00000180 10 72 2e 78 94 c5 8e f7 4c af 09 61 d3 d2 54 4e .r.x....L..a..TN 00:22:29.449 00000190 5a 8b df 54 38 ac f1 ab 97 f5 e4 a1 b9 11 36 f0 Z..T8.........6. 00:22:29.449 000001a0 6b 01 c4 62 43 77 a3 b2 25 55 e8 f1 91 eb 19 02 k..bCw..%U...... 
00:22:29.449 000001b0 81 d9 05 b3 5d 02 d6 70 a6 f9 53 36 69 43 6f 8b ....]..p..S6iCo. 00:22:29.449 000001c0 c0 5a 35 af 78 df 4c cd 07 3b 3b 54 c6 0d 86 60 .Z5.x.L..;;T...` 00:22:29.449 000001d0 72 11 11 6d 14 56 21 30 75 2e 25 b2 6a a9 0c 9e r..m.V!0u.%.j... 00:22:29.449 000001e0 7b 06 cc a2 df 77 b6 ed a2 50 46 ad 51 a6 e5 27 {....w...PF.Q..' 00:22:29.449 000001f0 ee 2e 6f fa 59 30 72 7c c5 76 30 1a d1 66 63 72 ..o.Y0r|.v0..fcr 00:22:29.449 host pubkey: 00:22:29.449 00000000 41 60 f6 ad 3d 96 dd cf 81 a2 3c 69 0d 19 1a 25 A`..=.......?l|e/..)... 00:22:29.449 00000020 9c 8a 0d c2 31 23 f2 f0 60 e5 3d b1 46 25 89 92 ....1#..`.=.F%.. 00:22:29.449 00000030 76 4b 48 7b 37 74 ff 02 27 f8 a8 a7 3d 9f fd 69 vKH{7t..'...=..i 00:22:29.449 00000040 bb 8c 45 fb bb b6 c9 5c cc 22 da 08 da 5d 90 3b ..E....\."...].; 00:22:29.449 00000050 e5 22 a7 2d 0c 08 5c 2b 77 a6 f6 a9 43 c5 3b 09 .".-..\+w...C.;. 00:22:29.449 00000060 e9 cc 63 da 46 77 ad a1 9e 90 08 9c 18 b4 00 31 ..c.Fw.........1 00:22:29.449 00000070 79 77 51 6d 21 e4 dd 0c 06 53 33 38 75 d2 ef 01 ywQm!....S38u... 00:22:29.449 00000080 0d 79 70 8b a3 0c 0d 02 16 3e dd 38 9c 5b c5 0a .yp......>.8.[.. 00:22:29.449 00000090 a5 fa 7d 3d 45 31 37 af 3c 98 56 a4 26 d5 e8 88 ..}=E17.<.V.&... 00:22:29.449 000000a0 a2 40 f6 c2 92 f0 5c ec 63 5b 3a 38 d2 25 d7 2b .@....\.c[:8.%.+ 00:22:29.449 000000b0 81 ca 24 bc dc 12 f2 ad 4a 0f c4 ec 83 25 69 85 ..$.....J....%i. 00:22:29.449 000000c0 31 9a 2c 0d 1b a9 b2 9a 88 5f f0 c3 a9 f1 e4 58 1.,......_.....X 00:22:29.449 000000d0 99 f5 d4 bb a7 0a dc ce aa e1 ea 49 e8 11 bf 69 ...........I...i 00:22:29.449 000000e0 36 59 99 1e ce 76 0d 61 76 b9 aa 14 3d ab 46 40 6Y...v.av...=.F@ 00:22:29.449 000000f0 68 7e 28 ed 27 b3 1f a5 53 5e 4b 5b b4 16 6a 5b h~(.'...S^K[..j[ 00:22:29.449 00000100 19 52 0e 2e 6d 3c f3 9c e5 70 2f ee 3a 14 c3 a6 .R..m<...p/.:... 00:22:29.449 00000110 ad 18 29 f5 c2 d3 f5 b3 96 37 9d 3b cc 6a b3 7d ..)......7.;.j.} 00:22:29.449 00000120 93 f0 b2 4d 11 6b f0 45 7f 37 0f ee 02 8a 7c 43 ...M.k.E.7....|C 00:22:29.449 00000130 1f d8 39 e7 d1 cc b0 c3 4d 94 9f 10 73 63 4b 48 ..9.....M...scKH 00:22:29.449 00000140 df 52 9a 4e 1b 83 58 cf f6 9e c7 f1 96 a7 b3 7b .R.N..X........{ 00:22:29.449 00000150 09 54 3f 59 ff 2d 33 68 11 c0 1f 35 4b d8 ac b8 .T?Y.-3h...5K... 00:22:29.449 00000160 ed 46 21 73 0f ca 8d d2 65 93 12 c0 32 29 b8 31 .F!s....e...2).1 00:22:29.449 00000170 ed ae 86 24 91 23 ee 80 4f fa ef 40 cd f0 ea fb ...$.#..O..@.... 00:22:29.449 00000180 ef 2e c0 e2 97 41 d8 5a 62 cf 6d 3e 5d 9a 2a d0 .....A.Zb.m>].*. 00:22:29.449 00000190 4d e9 58 82 f5 0d f7 f9 9a 3b 61 23 74 fd 24 77 M.X......;a#t.$w 00:22:29.449 000001a0 d2 d8 ec 06 0c 15 e4 83 27 33 dd 27 0f e0 19 2b ........'3.'...+ 00:22:29.449 000001b0 3d fe 46 29 8e 08 19 48 b6 8f a0 0e 05 b1 62 87 =.F)...H......b. 00:22:29.449 000001c0 53 5d 1c 56 df 38 0c 10 1b 8c 93 69 c8 e8 32 2e S].V.8.....i..2. 00:22:29.449 000001d0 fe ac 0a cd 26 d3 e7 b2 0b 8c b1 e7 50 f0 65 5a ....&.......P.eZ 00:22:29.449 000001e0 36 a8 dd c2 9b c4 dd 5d 80 13 4e 57 49 0f 59 f2 6......]..NWI.Y. 00:22:29.449 000001f0 73 51 13 58 ae 3d ef c7 d7 2c 5f 45 2c 53 76 72 sQ.X.=...,_E,Svr 00:22:29.449 dh secret: 00:22:29.449 00000000 1a 22 37 26 a9 33 78 cc 8f e4 ea 2b 5e 60 96 f9 ."7&.3x....+^`.. 00:22:29.449 00000010 7a 4e bd a4 10 a6 fc d4 a3 d6 9e 75 6f 3b b0 56 zN.........uo;.V 00:22:29.449 00000020 80 e6 a0 f0 4a d8 71 55 54 30 8c 7d 42 ec 26 6a ....J.qUT0.}B.&j 00:22:29.449 00000030 75 48 92 c5 6d c4 38 d0 7c 85 57 24 1c aa 27 d8 uH..m.8.|.W$..'. 
00:22:29.449 00000040 3f 46 be b0 5e 77 c6 24 ec 68 ae 71 ff 4d ed 0e ?F..^w.$.h.q.M.. 00:22:29.449 00000050 15 d0 71 e5 ef 43 d4 6b 2f ef 05 05 b4 71 a7 56 ..q..C.k/....q.V 00:22:29.449 00000060 ea e3 76 6e 6c 89 59 1b 5e 8c 6c dc 3a 40 fa b4 ..vnl.Y.^.l.:@.. 00:22:29.449 00000070 95 84 b0 46 ac 23 b6 15 41 dc d6 e9 33 33 62 47 ...F.#..A...33bG 00:22:29.449 00000080 d1 14 b7 8f f8 09 06 34 7e cb 35 df fe 79 86 6d .......4~.5..y.m 00:22:29.449 00000090 f4 33 3a 79 7d a6 06 3d 17 fd 72 61 f2 62 b1 29 .3:y}..=..ra.b.) 00:22:29.449 000000a0 1d 60 11 6c 7c d7 55 e9 83 0a 4b 48 00 5d 8e b7 .`.l|.U...KH.].. 00:22:29.449 000000b0 e4 38 c2 ee 56 90 c5 66 00 84 4e ab 89 20 04 29 .8..V..f..N.. .) 00:22:29.449 000000c0 0e c0 18 cb 1a 7e ff 57 93 b4 7e f9 09 67 8f 0f .....~.W..~..g.. 00:22:29.449 000000d0 f9 e6 b2 50 70 34 f2 1c 07 ba bf 9f 9b 8a 8b 5d ...Pp4.........] 00:22:29.449 000000e0 97 80 e6 9a 6f 00 d0 1e 48 f0 20 f2 ee 7d 45 09 ....o...H. ..}E. 00:22:29.449 000000f0 b0 99 aa 1b a9 20 83 d7 61 eb 83 9c bf 27 33 ea ..... ..a....'3. 00:22:29.449 00000100 77 e2 a6 7a 3d ec 5d b0 59 a5 03 1d 5a 1c 65 fa w..z=.].Y...Z.e. 00:22:29.449 00000110 55 53 cb f1 3a ea 06 c3 3d bf 15 9f b0 6f 08 ed US..:...=....o.. 00:22:29.449 00000120 05 48 4a 73 94 2b 3c 70 25 a6 3a a8 6a c2 ab 7d .HJs.+j\/hS.3G 00:22:29.449 000001d0 e3 bf 65 fe 1f e1 d0 78 b9 5d 8a 79 c6 f3 26 6e ..e....x.].y..&n 00:22:29.449 000001e0 38 4f 9b 59 1a 10 9d 85 9f 3c 58 39 d1 83 cc 12 8O.Y.....X 00:22:29.449 00000100 ca 50 23 0c ae ba c8 b8 ff f4 96 d5 2a 09 78 f9 .P#.........*.x. 00:22:29.449 00000110 60 fb cf 2b 22 56 d4 22 eb 74 d7 da 7c 53 e1 8f `..+"V.".t..|S.. 00:22:29.449 00000120 c8 6e 66 31 1f 15 6b da 80 b0 7e 79 0c 63 ef 99 .nf1..k...~y.c.. 00:22:29.449 00000130 c2 70 bc d3 fd d5 2a 2b 08 ea f6 2d d7 30 4e 1b .p....*+...-.0N. 00:22:29.449 00000140 f7 3c 7d 7e 40 af 0c 45 9e d2 39 30 77 52 74 5d .<}~@..E..90wRt] 00:22:29.449 00000150 e8 27 8d 02 fd df 3e 90 b8 87 83 93 55 14 a6 9d .'....>.....U... 00:22:29.449 00000160 03 6a 5e dd d6 9f d3 40 4f e0 03 a4 87 24 b6 67 .j^....@O....$.g 00:22:29.449 00000170 32 79 9e c7 62 a8 78 6f 68 79 11 96 5e 91 8e e6 2y..b.xohy..^... 00:22:29.449 00000180 cf a4 ff ae 6a 57 16 95 fb df 7b 93 52 fd 48 a9 ....jW....{.R.H. 00:22:29.449 00000190 51 c9 c2 e5 e4 d8 09 72 b1 71 a9 4f bb 91 7d 00 Q......r.q.O..}. 00:22:29.449 000001a0 47 88 ee 19 d6 7f d0 1b 81 df f2 ea 16 19 4a 71 G.............Jq 00:22:29.449 000001b0 3d 96 92 fd f0 6c d5 2e 37 b9 2c 75 a8 b3 77 6e =....l..7.,u..wn 00:22:29.449 000001c0 0f df d2 19 50 1d 59 c4 d6 32 9d 40 88 3d a7 52 ....P.Y..2.@.=.R 00:22:29.449 000001d0 6b 72 62 00 31 a2 fd 2b 21 70 07 ec 7e 90 46 a5 krb.1..+!p..~.F. 00:22:29.449 000001e0 0a 0f 95 5c bc 0c 44 1c 77 b7 5e cf 16 76 0d d8 ...\..D.w.^..v.. 00:22:29.449 000001f0 ce 62 22 10 7d 65 a6 c4 e9 e9 23 51 e5 93 d0 38 .b".}e....#Q...8 00:22:29.449 host pubkey: 00:22:29.449 00000000 02 32 5c f5 80 9c 99 d1 3b d2 30 77 8a be 97 99 .2\.....;.0w.... 00:22:29.449 00000010 ee b7 1f 33 ca 8e 02 09 de 1d e4 c5 15 32 cb 26 ...3.........2.& 00:22:29.449 00000020 6c f4 98 57 e5 32 ba 36 d4 57 38 a4 3d 5f 59 24 l..W.2.6.W8.=_Y$ 00:22:29.449 00000030 c2 21 12 2f b6 21 7a b1 2e ae 1c bb 83 a7 59 8e .!./.!z.......Y. 00:22:29.449 00000040 ed cc a8 7c e9 7a 22 64 4d 7f ed 73 52 80 9e b7 ...|.z"dM..sR... 00:22:29.449 00000050 0a b4 53 fa a9 ca 0e 5d aa 80 be f6 cc d4 4f 77 ..S....]......Ow 00:22:29.449 00000060 67 2f 45 03 82 20 e1 6e 3d 15 ae 5b a4 55 45 cd g/E.. .n=..[.UE. 
00:22:29.450 00000070 aa 57 5b c8 ff 3e 1a 3b 8f 26 85 b9 25 bb 94 50 .W[..>.;.&..%..P 00:22:29.450 00000080 83 9e b2 f7 d4 12 da 61 24 72 1d 90 4a cf d8 a1 .......a$r..J... 00:22:29.450 00000090 ee 9b ab 47 a8 1c 58 53 f0 84 e9 25 00 41 76 58 ...G..XS...%.AvX 00:22:29.450 000000a0 67 fb 66 b3 12 2c 66 64 a1 7a 2d 37 2f 22 f1 b9 g.f..,fd.z-7/".. 00:22:29.450 000000b0 39 7a 0e 43 05 7d 84 ca 77 7c 15 ed 0f c0 48 b6 9z.C.}..w|....H. 00:22:29.450 000000c0 bd 2b 5b 16 ab fa 8c a1 4b 21 2d ee 65 88 3c 9c .+[.....K!-.e.<. 00:22:29.450 000000d0 08 3b c0 53 1f 89 17 07 9f 1d 55 4c ce c0 cd 83 .;.S......UL.... 00:22:29.450 000000e0 e5 43 08 9d d3 15 19 0f bf 77 b3 ad 4c a7 0c e6 .C.......w..L... 00:22:29.450 000000f0 c1 2e 83 93 72 04 69 6e dc d1 13 45 be 7e c9 e7 ....r.in...E.~.. 00:22:29.450 00000100 60 dc 92 b5 f4 14 c0 98 56 fb d5 cf 39 aa 20 53 `.......V...9. S 00:22:29.450 00000110 54 8e e3 58 2c 15 5b 13 9b de aa fd f6 27 29 65 T..X,.[......')e 00:22:29.450 00000120 67 a5 86 91 4e d0 eb d5 1d 61 b4 7e e5 a3 27 d6 g...N....a.~..'. 00:22:29.450 00000130 96 7d db 39 b5 d9 0a b3 51 85 4e a7 a8 45 03 f3 .}.9....Q.N..E.. 00:22:29.450 00000140 10 a6 c1 5b ac 78 56 1d 8c 44 18 db a1 67 b7 bd ...[.xV..D...g.. 00:22:29.450 00000150 2f 76 d8 11 d0 05 27 cc 64 45 de 40 fc c5 99 ba /v....'.dE.@.... 00:22:29.450 00000160 93 e7 d7 29 5b 99 8b 32 d9 df 5b 6f f2 44 12 09 ...)[..2..[o.D.. 00:22:29.450 00000170 65 fb c8 65 11 4b 0a 12 0f 61 58 37 e2 ba 58 9b e..e.K...aX7..X. 00:22:29.450 00000180 b1 18 33 d9 48 55 42 91 b4 a0 6f 7a 75 3d 5e 9d ..3.HUB...ozu=^. 00:22:29.450 00000190 88 12 39 a8 56 5e 2a 82 2a 36 e4 1a a7 73 91 a5 ..9.V^*.*6...s.. 00:22:29.450 000001a0 1f c6 7b 90 7f 46 fb 96 97 e7 15 7c 1e 86 78 57 ..{..F.....|..xW 00:22:29.450 000001b0 f7 2e 58 1c e6 7a fc 69 2a 6e e5 ce e5 0f d9 53 ..X..z.i*n.....S 00:22:29.450 000001c0 c0 39 38 b3 5a 2d 7d 50 d7 35 ab 3b c2 48 d8 57 .98.Z-}P.5.;.H.W 00:22:29.450 000001d0 47 14 a0 24 77 bb d4 b1 10 11 4f 59 fb 93 62 79 G..$w.....OY..by 00:22:29.450 000001e0 aa 56 a2 18 a0 0a 91 1d f7 5b ab 14 71 d8 7c ba .V.......[..q.|. 00:22:29.450 000001f0 03 e2 52 37 3c c7 a0 d8 f4 79 df db c0 d4 1c dd ..R7<....y...... 00:22:29.450 dh secret: 00:22:29.450 00000000 92 8b 8e 89 e8 aa 59 4c e2 47 30 38 8c 2a 56 bb ......YL.G08.*V. 00:22:29.450 00000010 00 d4 da cd c0 95 ab 07 52 81 3e df 95 f5 25 ad ........R.>...%. 00:22:29.450 00000020 d4 c6 f5 0f 9c df df d5 0d 85 f1 f8 31 48 4a 7b ............1HJ{ 00:22:29.450 00000030 d8 ac ff ef 22 ed c3 a0 ac 08 82 b6 5b b8 58 84 ....".......[.X. 00:22:29.450 00000040 6c 0e d1 3c 2a fd 56 92 41 dc 18 6c e4 7d b2 19 l..<*.V.A..l.}.. 00:22:29.450 00000050 f7 ed 5c cc 70 64 c5 bc 86 c5 09 ad 0f 1e d7 51 ..\.pd.........Q 00:22:29.450 00000060 bb 68 9c c9 40 47 fb f6 61 cd 63 6f e7 ae b4 aa .h..@G..a.co.... 00:22:29.450 00000070 58 0f c6 32 2d a1 bd 6b 26 54 41 70 d7 15 b5 8c X..2-..k&TAp.... 00:22:29.450 00000080 53 e3 76 bb f9 c7 26 ac 40 49 10 4f 54 5b 00 34 S.v...&.@I.OT[.4 00:22:29.450 00000090 15 c5 e7 12 52 dc 65 84 39 14 71 b8 38 67 16 37 ....R.e.9.q.8g.7 00:22:29.450 000000a0 75 47 7a c0 ba bc 6e 06 c1 84 35 2f b1 c3 af 33 uGz...n...5/...3 00:22:29.450 000000b0 c2 c7 04 b9 98 0d 75 79 70 dc 3f 51 b9 89 2a 26 ......uyp.?Q..*& 00:22:29.450 000000c0 27 19 2b cc a6 00 9b 55 a5 a8 95 df 36 16 b7 98 '.+....U....6... 00:22:29.450 000000d0 ab c6 db 6e 24 f1 fd 9a b9 49 9d 2d 7a 65 12 9a ...n$....I.-ze.. 00:22:29.450 000000e0 be b5 e7 d9 40 45 cb 5d ac cf 57 be ee 21 b8 98 ....@E.]..W..!.. 
00:22:29.450 000000f0 0c d1 13 fe 39 db af 4d 56 80 0a 93 19 33 22 ac ....9..MV....3". 00:22:29.450 00000100 a1 2a a4 99 61 20 1f 4f 9a 49 78 a3 ce 74 5b 67 .*..a .O.Ix..t[g 00:22:29.450 00000110 75 cf ef 64 d1 bc c4 92 e2 43 fd 92 95 e2 05 7e u..d.....C.....~ 00:22:29.450 00000120 c5 93 f5 22 88 2c 5e 4d 51 5f e1 d5 bc 94 35 6b ...".,^MQ_....5k 00:22:29.450 00000130 de 9e 8f a6 f2 a4 c8 48 13 03 1c bd 7c 52 fb f5 .......H....|R.. 00:22:29.450 00000140 1f 65 fb 26 c5 bc 7f 29 8c a3 6d a9 55 f2 c2 69 .e.&...)..m.U..i 00:22:29.450 00000150 dd 0e 9b 51 99 e0 84 01 16 10 9f af a5 27 bf 65 ...Q.........'.e 00:22:29.450 00000160 02 fb 07 2f 10 25 e2 0d 12 bc 40 26 b3 bc b4 a8 .../.%....@&.... 00:22:29.450 00000170 7c 02 ef f0 4e ba b7 6e ae e8 24 f9 b1 35 bf 74 |...N..n..$..5.t 00:22:29.450 00000180 fe 61 cd 07 a1 91 a1 9e 57 29 2d 87 97 3e cb 9b .a......W)-..>.. 00:22:29.450 00000190 dc 6b 34 cd d1 f2 84 fa 24 92 ca 1d 33 b0 9a be .k4.....$...3... 00:22:29.450 000001a0 c6 06 e8 24 15 8d 00 0c d8 c0 28 b4 23 c0 96 b3 ...$......(.#... 00:22:29.450 000001b0 01 97 d8 ca 77 b0 bd 6b 51 90 9f ad f8 9b 1d 4d ....w..kQ......M 00:22:29.450 000001c0 bc f8 7e 91 81 e1 0d 00 a0 25 b7 64 5d 07 32 fb ..~......%.d].2. 00:22:29.450 000001d0 5e d5 fd ed c2 4e b9 27 b1 06 02 8e 11 bb 02 b4 ^....N.'........ 00:22:29.450 000001e0 3e 32 c5 46 2f 67 93 05 18 0f 41 d7 3a 28 2a 58 >2.F/g....A.:(*X 00:22:29.450 000001f0 1a 6c f7 20 d5 39 b4 c5 57 78 32 67 ec d3 65 bc .l. .9..Wx2g..e. 00:22:29.450 [2024-09-27 13:27:03.923937] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key2, hash=1, dhgroup=3, seq=3775755197, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.450 [2024-09-27 13:27:03.924222] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.450 [2024-09-27 13:27:03.948728] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.450 [2024-09-27 13:27:03.948965] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.450 [2024-09-27 13:27:03.949221] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.450 [2024-09-27 13:27:03.949482] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.450 [2024-09-27 13:27:04.001414] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.450 [2024-09-27 13:27:04.001652] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 1 (sha256) 00:22:29.450 [2024-09-27 13:27:04.001959] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 3 (ffdhe4096) 00:22:29.450 [2024-09-27 13:27:04.002108] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.450 [2024-09-27 13:27:04.002432] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.450 ctrlr pubkey: 00:22:29.450 00000000 71 82 a2 22 
43 b1 c0 49 e9 65 52 5a 15 46 b4 ae q.."C..I.eRZ.F.. 00:22:29.450 00000010 27 8c 34 5f 74 2a 7b 09 26 21 1f 26 c5 a2 39 a2 '.4_t*{.&!.&..9. 00:22:29.450 00000020 25 73 30 15 4e 3c 84 6d 84 6a fd 19 3b 56 c4 b7 %s0.N<.m.j..;V.. 00:22:29.450 00000030 50 a7 51 f1 3c 5d 83 cf 87 bb a3 03 e9 39 41 42 P.Q.<].......9AB 00:22:29.450 00000040 55 78 1a d9 1f 35 25 76 fe e2 d3 f2 9e 70 67 55 Ux...5%v.....pgU 00:22:29.450 00000050 70 3a ca ea 55 b6 ea dd 44 bb 03 a0 7c 1a 1c 03 p:..U...D...|... 00:22:29.450 00000060 4f 7a d6 3c bd e2 6e a7 78 ad 92 a0 1e 36 41 13 Oz.<..n.x....6A. 00:22:29.450 00000070 32 ed 8f 30 d2 0e 9e 9b c6 46 1a 83 0d e1 b9 2c 2..0.....F....., 00:22:29.450 00000080 58 54 80 5f 86 d9 2d 89 fa 37 a3 92 52 10 89 9e XT._..-..7..R... 00:22:29.450 00000090 c5 b9 0e 1f b1 50 5c 39 86 b0 86 ca 3a 57 78 9f .....P\9....:Wx. 00:22:29.450 000000a0 fa 41 5a a9 9e 0c fc a6 82 26 63 1a 3b 59 94 6e .AZ......&c.;Y.n 00:22:29.450 000000b0 c8 14 ef ab 59 df 38 46 a9 9c 22 bf f2 b8 dc 56 ....Y.8F.."....V 00:22:29.450 000000c0 e1 e1 ed d4 8f b4 5a ad 76 c4 a0 bd 64 5f 79 05 ......Z.v...d_y. 00:22:29.450 000000d0 b3 bf 00 6a b0 de 44 11 05 1b 0d ff 22 b3 8c 5c ...j..D....."..\ 00:22:29.450 000000e0 d4 82 2c 4a cf 2e 2b ab 8f 95 80 ff 63 d3 f7 38 ..,J..+.....c..8 00:22:29.450 000000f0 18 9a 2a fa bc c7 b0 7f ad f8 24 a8 26 ec 3e 58 ..*.......$.&.>X 00:22:29.450 00000100 ca 50 23 0c ae ba c8 b8 ff f4 96 d5 2a 09 78 f9 .P#.........*.x. 00:22:29.450 00000110 60 fb cf 2b 22 56 d4 22 eb 74 d7 da 7c 53 e1 8f `..+"V.".t..|S.. 00:22:29.450 00000120 c8 6e 66 31 1f 15 6b da 80 b0 7e 79 0c 63 ef 99 .nf1..k...~y.c.. 00:22:29.450 00000130 c2 70 bc d3 fd d5 2a 2b 08 ea f6 2d d7 30 4e 1b .p....*+...-.0N. 00:22:29.450 00000140 f7 3c 7d 7e 40 af 0c 45 9e d2 39 30 77 52 74 5d .<}~@..E..90wRt] 00:22:29.450 00000150 e8 27 8d 02 fd df 3e 90 b8 87 83 93 55 14 a6 9d .'....>.....U... 00:22:29.450 00000160 03 6a 5e dd d6 9f d3 40 4f e0 03 a4 87 24 b6 67 .j^....@O....$.g 00:22:29.450 00000170 32 79 9e c7 62 a8 78 6f 68 79 11 96 5e 91 8e e6 2y..b.xohy..^... 00:22:29.450 00000180 cf a4 ff ae 6a 57 16 95 fb df 7b 93 52 fd 48 a9 ....jW....{.R.H. 00:22:29.450 00000190 51 c9 c2 e5 e4 d8 09 72 b1 71 a9 4f bb 91 7d 00 Q......r.q.O..}. 00:22:29.450 000001a0 47 88 ee 19 d6 7f d0 1b 81 df f2 ea 16 19 4a 71 G.............Jq 00:22:29.450 000001b0 3d 96 92 fd f0 6c d5 2e 37 b9 2c 75 a8 b3 77 6e =....l..7.,u..wn 00:22:29.450 000001c0 0f df d2 19 50 1d 59 c4 d6 32 9d 40 88 3d a7 52 ....P.Y..2.@.=.R 00:22:29.450 000001d0 6b 72 62 00 31 a2 fd 2b 21 70 07 ec 7e 90 46 a5 krb.1..+!p..~.F. 00:22:29.450 000001e0 0a 0f 95 5c bc 0c 44 1c 77 b7 5e cf 16 76 0d d8 ...\..D.w.^..v.. 00:22:29.450 000001f0 ce 62 22 10 7d 65 a6 c4 e9 e9 23 51 e5 93 d0 38 .b".}e....#Q...8 00:22:29.450 host pubkey: 00:22:29.450 00000000 ed 11 f8 82 4a 20 35 89 de cd 48 1b 5d 1f c4 b0 ....J 5...H.]... 00:22:29.450 00000010 4f 2b 81 2e 2e d7 e9 9a 9c 2e c6 c2 89 61 3b e2 O+...........a;. 00:22:29.450 00000020 49 cb 72 33 95 c8 69 66 7e 84 fd b3 11 0a 8e a0 I.r3..if~....... 00:22:29.450 00000030 3c 31 a9 18 e6 be 7d 7c a3 5b 9e 12 c4 28 82 d8 <1....}|.[...(.. 00:22:29.450 00000040 bd 96 05 3e f6 3f 9f 2d 40 c5 c9 9a 4c 05 63 93 ...>.?.-@...L.c. 00:22:29.450 00000050 f2 e9 2e 17 0d 80 5f f7 d0 a0 d1 84 95 26 27 e6 ......_......&'. 00:22:29.450 00000060 2e a9 ac 6a 5f 73 ed e4 5f 53 3c 0f 6f 49 a9 84 ...j_s.._S<.oI.. 00:22:29.450 00000070 14 8b 06 d5 09 fe 3e f8 36 85 00 66 d8 c1 fe d0 ......>.6..f.... 
00:22:29.450 00000080 98 09 70 c2 59 a7 05 26 7b f1 c8 cd f0 09 b9 12 ..p.Y..&{....... 00:22:29.450 00000090 20 5d d6 36 cb 00 e7 ac e3 b0 7b a1 f4 00 f9 6e ].6......{....n 00:22:29.450 000000a0 8c 06 2c 42 9c f1 3d b2 c6 88 74 8e af a0 7e ca ..,B..=...t...~. 00:22:29.450 000000b0 96 4b 7d 48 42 00 d3 33 50 62 67 5e 27 78 f6 6e .K}HB..3Pbg^'x.n 00:22:29.450 000000c0 c6 bd 2e fa a2 79 80 cc 9c ac 74 86 87 c1 0e 72 .....y....t....r 00:22:29.450 000000d0 ce 48 1c 87 d0 b0 fb 05 ae 7d 0e ff c0 d4 e5 f1 .H.......}...... 00:22:29.450 000000e0 be ea df f7 9a dd 82 0d c2 0a 0f 98 d3 d9 8b cc ................ 00:22:29.450 000000f0 8d de cb a5 e8 7f 2e e0 f8 2b 79 00 97 da 79 91 .........+y...y. 00:22:29.450 00000100 c2 ea 4d fd 78 04 79 8d 5a a5 04 d5 5e b9 c8 ac ..M.x.y.Z...^... 00:22:29.450 00000110 18 b2 86 a2 48 0b d5 fc b7 8d 8f 97 63 cb 81 f9 ....H.......c... 00:22:29.450 00000120 7b 6b 9d 4c bf e9 34 1c d9 89 b2 de 8c 50 3e 7d {k.L..4......P>} 00:22:29.450 00000130 01 47 0b 95 c4 c9 53 c5 d3 c9 69 a9 44 ac 33 c9 .G....S...i.D.3. 00:22:29.450 00000140 fe 0b be ca fa ef 08 e7 96 ac f3 a0 bb 07 6a 5a ..............jZ 00:22:29.450 00000150 b0 2f 03 a9 c5 e8 d7 a7 f5 c1 5e d2 80 f6 86 17 ./........^..... 00:22:29.450 00000160 15 2f 8b 11 f2 1c 3b 04 a5 d1 94 b2 2b 86 26 e7 ./....;.....+.&. 00:22:29.450 00000170 3a eb 1f 54 55 c8 8b e3 8d 2f 88 42 f7 ab 08 61 :..TU..../.B...a 00:22:29.450 00000180 4a 35 6c 32 8e 03 99 2a d4 57 84 b3 0d 4d 83 d5 J5l2...*.W...M.. 00:22:29.450 00000190 00 df 00 f5 ad 20 46 4f 70 e9 b8 d6 e7 25 bc 75 ..... FOp....%.u 00:22:29.450 000001a0 a6 ae 85 2d be 8f 4f b1 b6 e1 db 06 ef 2f e2 f0 ...-..O....../.. 00:22:29.450 000001b0 c6 45 12 f8 b1 32 38 03 5e 11 85 6d c7 35 f2 7b .E...28.^..m.5.{ 00:22:29.450 000001c0 36 27 7e ca 6d 89 e4 08 0f 1c 33 2b 15 55 f9 2c 6'~.m.....3+.U., 00:22:29.450 000001d0 09 6a 1d 42 e4 51 49 b9 60 f1 4e 5e 4f 18 d0 11 .j.B.QI.`.N^O... 00:22:29.450 000001e0 b2 2e af ee 9c c5 68 43 43 0f a3 ce ea 93 e7 15 ......hCC....... 00:22:29.451 000001f0 4f 35 bf eb 04 a4 2a ac 0c de 2b d0 d2 6f 21 34 O5....*...+..o!4 00:22:29.451 dh secret: 00:22:29.451 00000000 e2 bb 72 74 5b 26 d2 f1 d1 0e aa cd ce 0d b3 4c ..rt[&.........L 00:22:29.451 00000010 56 d7 9b d7 fe 18 f8 33 2d da b4 fc 62 f1 ba f2 V......3-...b... 00:22:29.451 00000020 d9 4a a1 01 54 1c 61 05 31 85 53 a0 e4 bf de 78 .J..T.a.1.S....x 00:22:29.451 00000030 b7 df 67 ce 13 8f 62 8a c8 8f 4e 85 57 94 10 c5 ..g...b...N.W... 00:22:29.451 00000040 fe 85 35 70 61 16 de c9 7f 71 83 59 70 46 39 40 ..5pa....q.YpF9@ 00:22:29.451 00000050 f8 27 dd 15 d0 9e 28 45 87 80 a1 3e 1e f7 ce 4b .'....(E...>...K 00:22:29.451 00000060 c1 ea 96 10 52 cd 14 69 5d 49 0b 48 fe 49 c8 f0 ....R..i]I.H.I.. 00:22:29.451 00000070 07 8b 30 f1 ab 3e a2 60 6f d1 e6 5f a9 84 93 bb ..0..>.`o.._.... 00:22:29.451 00000080 ae 68 f4 c8 26 75 01 76 2f 03 12 b8 a7 d2 c8 f6 .h..&u.v/....... 00:22:29.451 00000090 aa 75 29 6e c2 69 60 0b 4c 4f 08 d2 7f d3 d7 e5 .u)n.i`.LO...... 00:22:29.451 000000a0 45 ee 45 3d c6 76 15 59 35 8a 01 cd 91 34 a4 04 E.E=.v.Y5....4.. 00:22:29.451 000000b0 cb dd 08 04 24 c5 05 61 11 0d b0 a4 84 39 fe db ....$..a.....9.. 00:22:29.451 000000c0 c2 a2 28 06 8a 77 a6 e7 5a 7e 47 29 4d 8e 89 37 ..(..w..Z~G)M..7 00:22:29.451 000000d0 07 ac 5d d9 8d 26 48 e7 33 68 58 81 50 78 bd 24 ..]..&H.3hX.Px.$ 00:22:29.451 000000e0 17 d2 2d 98 9a c6 e4 43 d2 e9 60 80 23 40 7a b0 ..-....C..`.#@z. 
00:22:29.451 000000f0 7a cc 12 71 fa b6 0a a1 01 d4 93 c3 7f 4e ad 54 z..q.........N.T 00:22:29.451 00000100 7a 8b 76 5a a9 3c 72 2b b3 68 70 27 c6 cf b3 67 z.vZ...X8.... 00:22:29.451 00000070 dc fe a7 19 d8 e5 fb 40 9b 99 c2 b7 fa 6e 3b 83 .......@.....n;. 00:22:29.451 00000080 80 df ff 2c 2a 6f 73 96 dc b7 0d 66 3b 42 45 13 ...,*os....f;BE. 00:22:29.451 00000090 e4 98 7e b4 db 2d 04 07 b2 29 4d f6 9d da 12 85 ..~..-...)M..... 00:22:29.451 000000a0 7b b0 66 fa e5 ed b4 df 81 5f 67 e6 a9 e7 93 8d {.f......_g..... 00:22:29.451 000000b0 44 94 d3 97 8a a2 45 23 67 cd a6 c6 99 34 ff 35 D.....E#g....4.5 00:22:29.451 000000c0 c3 2a 27 65 11 2b ea 4e 31 b5 1c 68 85 21 8d 21 .*'e.+.N1..h.!.! 00:22:29.451 000000d0 3e 52 d7 11 7e 6d be cc 0e 87 ab 29 f6 a9 e5 98 >R..~m.....).... 00:22:29.451 000000e0 79 cc 45 d6 aa 46 78 ef ee f2 8e df be 6e 59 28 y.E..Fx......nY( 00:22:29.451 000000f0 c6 17 c0 e1 5d 65 f0 7b ac 7a e4 4d e6 da c5 45 ....]e.{.z.M...E 00:22:29.451 00000100 93 33 fd 90 62 dc a6 ab 37 2b 00 09 3a a0 40 5c .3..b...7+..:.@\ 00:22:29.451 00000110 ab 15 96 90 4a e2 93 9f 1d 71 1c 1e c7 ab 64 7a ....J....q....dz 00:22:29.451 00000120 b6 7e 51 7e 6f 77 d0 d6 3b 38 59 16 b3 c5 35 12 .~Q~ow..;8Y...5. 00:22:29.451 00000130 97 56 fa de c8 f9 02 c9 13 bc 08 e2 f1 a5 af a8 .V.............. 00:22:29.451 00000140 3c 54 57 6d b1 53 2a c4 55 1c e5 23 e3 0f 8c 12 .0X.b.....mD.# 00:22:29.451 000000c0 f7 a1 26 be 9f 0e 81 31 48 31 4f 0a 78 5e d4 74 ..&....1H1O.x^.t 00:22:29.451 000000d0 71 c2 b3 59 e7 54 35 26 57 f3 8e d7 4d 2c f5 9f q..Y.T5&W...M,.. 00:22:29.451 000000e0 b9 5c 9f b8 bf 1e b1 d7 2a dd 77 08 9f 7f 29 11 .\......*.w...). 00:22:29.451 000000f0 3e 84 12 e5 76 03 5e 1c 9e 57 5b a7 45 1f a2 b6 >...v.^..W[.E... 00:22:29.451 00000100 f4 b5 dc 10 eb 02 3f 8b 23 55 f5 f3 85 5f 54 51 ......?.#U..._TQ 00:22:29.451 00000110 a8 d1 8a 3c 6b d3 7d d9 17 fa b0 47 9b 60 50 38 ... 00:22:29.451 00000180 d9 5f a4 4b 37 69 60 69 55 57 de e9 2f 0b f4 69 ._.K7i`iUW../..i 00:22:29.451 00000190 bc 16 17 2d d3 7c e4 5c 71 59 1d 78 54 cc 45 83 ...-.|.\qY.xT.E. 00:22:29.451 000001a0 0a d5 50 34 11 b5 27 ad 72 bc 24 d4 a7 2e be 74 ..P4..'.r.$....t 00:22:29.451 000001b0 5e 14 c5 94 1a 9e ea cb aa 74 34 e2 7c 0b 93 80 ^........t4.|... 00:22:29.451 000001c0 9d 17 f4 8e 3c fc 7b d9 a4 9d 38 64 ae 7f 4d 3b ....<.{...8d..M; 00:22:29.451 000001d0 ae 63 86 e5 63 86 d5 16 05 59 5c 6a 74 0c 06 1a .c..c....Y\jt... 00:22:29.451 000001e0 46 fd b9 6a 72 2f c6 1b c2 42 4c f2 cc 50 66 ed F..jr/...BL..Pf. 00:22:29.451 000001f0 1d 9f c1 1e e1 76 29 07 7c b6 58 9e e8 b7 96 de .....v).|.X..... 00:22:29.451 dh secret: 00:22:29.451 00000000 08 9f 6a f3 59 33 d9 9a 97 e8 00 a1 96 e3 94 85 ..j.Y3.......... 00:22:29.451 00000010 85 7f 8b 08 4e a2 40 2b 56 43 db a6 27 4f c2 3a ....N.@+VC..'O.: 00:22:29.451 00000020 4f 3b 69 db 31 bc e8 b8 1a 8f 9f 17 b7 b2 be c6 O;i.1........... 00:22:29.452 00000030 34 91 11 cf 10 89 cc 7d ee 53 64 ac 4e 2e 27 3e 4......}.Sd.N.'> 00:22:29.452 00000040 a2 3a cf 9c 67 1d 31 d8 cf 36 d5 c3 98 30 03 a2 .:..g.1..6...0.. 00:22:29.452 00000050 51 ed 40 84 4a 7e d0 d8 93 e4 f1 fb e2 f8 a8 85 Q.@.J~.......... 00:22:29.452 00000060 d8 ff f1 3d ea 7f e0 b6 ac 5b cd d9 da 90 2b a9 ...=.....[....+. 00:22:29.452 00000070 47 ea a2 95 10 41 60 2d 82 ce e1 3a f6 f9 90 52 G....A`-...:...R 00:22:29.452 00000080 18 d3 3d 14 66 93 7e 67 03 f2 29 2c d2 be d0 31 ..=.f.~g..),...1 00:22:29.452 00000090 24 bb 29 b7 bb 36 29 f2 44 3f 66 0e 73 a5 fc a2 $.)..6).D?f.s... 
00:22:29.452 000000a0 c4 df 93 bb 80 ae 63 0c 2e 40 aa 75 a2 15 52 b6 ......c..@.u..R. 00:22:29.452 000000b0 42 ab e5 34 e7 28 0b 0a e6 9a f4 41 d9 43 bf a3 B..4.(.....A.C.. 00:22:29.452 000000c0 68 f7 38 c5 17 a5 60 57 54 b7 dd 97 f0 11 66 c2 h.8...`WT.....f. 00:22:29.452 000000d0 bf bc 70 53 38 ba 24 7e d8 a9 8e 1a b7 99 9a e3 ..pS8.$~........ 00:22:29.452 000000e0 63 8f 3f 9b 01 4e 82 30 fd d7 1e 18 a1 b5 0e a4 c.?..N.0........ 00:22:29.452 000000f0 c4 92 cf 9f 08 6f ff 17 24 d0 fb f4 fe 0c 47 4b .....o..$.....GK 00:22:29.452 00000100 ff 33 24 17 e7 d9 78 0d 34 08 c7 b8 11 fb 7a 6b .3$...x.4.....zk 00:22:29.452 00000110 0b f0 a4 b0 18 f8 3b 8e a7 60 c5 91 69 24 c9 cb ......;..`..i$.. 00:22:29.452 00000120 75 3f a6 1e 29 af 87 15 d9 7a a4 6c bc e0 1c 94 u?..)....z.l.... 00:22:29.452 00000130 7b 22 b0 40 ae 89 76 31 f4 03 6d c2 9f 3c 63 ec {".@..v1..m....X8.... 00:22:29.452 00000070 dc fe a7 19 d8 e5 fb 40 9b 99 c2 b7 fa 6e 3b 83 .......@.....n;. 00:22:29.452 00000080 80 df ff 2c 2a 6f 73 96 dc b7 0d 66 3b 42 45 13 ...,*os....f;BE. 00:22:29.452 00000090 e4 98 7e b4 db 2d 04 07 b2 29 4d f6 9d da 12 85 ..~..-...)M..... 00:22:29.452 000000a0 7b b0 66 fa e5 ed b4 df 81 5f 67 e6 a9 e7 93 8d {.f......_g..... 00:22:29.452 000000b0 44 94 d3 97 8a a2 45 23 67 cd a6 c6 99 34 ff 35 D.....E#g....4.5 00:22:29.452 000000c0 c3 2a 27 65 11 2b ea 4e 31 b5 1c 68 85 21 8d 21 .*'e.+.N1..h.!.! 00:22:29.452 000000d0 3e 52 d7 11 7e 6d be cc 0e 87 ab 29 f6 a9 e5 98 >R..~m.....).... 00:22:29.452 000000e0 79 cc 45 d6 aa 46 78 ef ee f2 8e df be 6e 59 28 y.E..Fx......nY( 00:22:29.452 000000f0 c6 17 c0 e1 5d 65 f0 7b ac 7a e4 4d e6 da c5 45 ....]e.{.z.M...E 00:22:29.452 00000100 93 33 fd 90 62 dc a6 ab 37 2b 00 09 3a a0 40 5c .3..b...7+..:.@\ 00:22:29.452 00000110 ab 15 96 90 4a e2 93 9f 1d 71 1c 1e c7 ab 64 7a ....J....q....dz 00:22:29.452 00000120 b6 7e 51 7e 6f 77 d0 d6 3b 38 59 16 b3 c5 35 12 .~Q~ow..;8Y...5. 00:22:29.452 00000130 97 56 fa de c8 f9 02 c9 13 bc 08 e2 f1 a5 af a8 .V.............. 00:22:29.452 00000140 3c 54 57 6d b1 53 2a c4 55 1c e5 23 e3 0f 8c 12 ....xT. 00:22:29.452 000001d0 46 3e 42 94 98 9b e2 d3 82 68 f9 7b cf 7c 6a e0 F>B......h.{.|j. 00:22:29.452 000001e0 93 9c 9a 1e 61 3c 47 d6 13 05 22 51 f3 06 d9 cb ....a..g. 00:22:29.452 00000040 1d f8 7b 57 de 0c f4 22 9f f6 e5 3d 41 87 c7 7e ..{W..."...=A..~ 00:22:29.452 00000050 2a 1d 46 cd 3c c9 d6 8f 9a 81 41 d7 49 e7 57 63 *.F.<.....A.I.Wc 00:22:29.452 00000060 3e b6 68 c2 37 57 10 e8 8a 51 a3 ef 6f df 04 fe >.h.7W...Q..o... 00:22:29.452 00000070 3b 7a 64 b8 6a 8f 63 59 37 9a 10 48 f4 e5 23 82 ;zd.j.cY7..H..#. 00:22:29.452 00000080 ca 66 6c 5f a9 86 5b c9 cc 1a 98 7d 6f 96 26 92 .fl_..[....}o.&. 00:22:29.452 00000090 ec a8 48 8c 3f 2b f9 02 86 3c df 45 46 18 a4 c3 ..H.?+...<.EF... 00:22:29.452 000000a0 ce de 35 0c 2e 4d 99 68 1e ce 59 91 b5 24 2e c4 ..5..M.h..Y..$.. 00:22:29.452 000000b0 f3 d1 e8 10 bb cc 1d 14 de c7 a9 dd f0 b0 0c 5c ...............\ 00:22:29.452 000000c0 c3 ba 25 90 d2 90 8b ba 4f df d8 fb ee 41 8f 03 ..%.....O....A.. 00:22:29.452 000000d0 71 a3 b2 f2 9d 49 24 cf f0 0c d1 e1 22 39 2d 4a q....I$....."9-J 00:22:29.452 000000e0 32 78 a8 e9 07 cc 12 6e a1 02 98 6f 28 1a dd ea 2x.....n...o(... 00:22:29.452 000000f0 61 9b 8b a9 fd 5f ef 49 7e 66 29 e8 45 2f a9 3a a...._.I~f).E/.: 00:22:29.452 00000100 b0 4b 36 27 07 c3 db 07 0c c5 ae 6b be d3 93 be .K6'.......k.... 00:22:29.452 00000110 56 87 f3 1c e8 91 96 65 5b b0 68 a3 2c 45 60 fb V......e[.h.,E`. 
00:22:29.452 00000120 30 25 08 34 52 9b 2e 43 df 7e 11 f3 30 9e 19 c3 0%.4R..C.~..0... 00:22:29.452 00000130 e1 84 81 49 29 1b ca 33 cc 42 98 b2 2b bc bb 11 ...I)..3.B..+... 00:22:29.452 00000140 c9 3f 5a d8 5d 0d d6 ad 2c 8f 36 2a ca 8b 8f 73 .?Z.]...,.6*...s 00:22:29.452 00000150 c0 67 e0 e8 76 c6 50 41 94 11 f4 72 f3 56 fd 12 .g..v.PA...r.V.. 00:22:29.452 00000160 7f 4c ac 2e a4 0a 7d 56 eb 4e c0 21 8f 32 ca fb .L....}V.N.!.2.. 00:22:29.453 00000170 0e 6c b2 74 6f c3 9d e0 da 06 9f 7c 7c a5 e9 87 .l.to......||... 00:22:29.453 00000180 1d 62 51 ea a1 07 ce 3c c8 ed 9c 3d 2f 75 3f e2 .bQ....<...=/u?. 00:22:29.453 00000190 f7 b0 85 ac 30 2b 93 a2 95 f0 ed 44 85 dd 49 68 ....0+.....D..Ih 00:22:29.453 000001a0 9c cd df bd db 79 6d b0 52 e3 9d 23 b3 64 d0 be .....ym.R..#.d.. 00:22:29.453 000001b0 d6 2b 48 ff dd 51 ab 90 e0 18 93 d1 0e e5 15 d3 .+H..Q.......... 00:22:29.453 000001c0 cb a8 d8 29 58 c0 21 c5 4b c7 f6 6e 1b 95 70 c0 ...)X.!.K..n..p. 00:22:29.453 000001d0 76 10 60 ad 6a 5e 60 a1 42 58 f5 03 67 56 f0 07 v.`.j^`.BX..gV.. 00:22:29.453 000001e0 c4 2d ea a8 6c db 58 c3 31 73 fc 30 a0 7e 7b e3 .-..l.X.1s.0.~{. 00:22:29.453 000001f0 c2 da b5 da d0 18 f4 c5 5f 9c eb 91 77 04 b7 fa ........_...w... 00:22:29.453 [2024-09-27 13:27:04.312007] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key3, hash=1, dhgroup=3, seq=3775755200, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.453 [2024-09-27 13:27:04.312444] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.453 [2024-09-27 13:27:04.335473] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.453 [2024-09-27 13:27:04.336013] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.453 [2024-09-27 13:27:04.336302] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.453 [2024-09-27 13:27:04.336738] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.453 [2024-09-27 13:27:04.444090] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.453 [2024-09-27 13:27:04.444401] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.453 [2024-09-27 13:27:04.444630] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 3 (ffdhe4096) 00:22:29.453 [2024-09-27 13:27:04.444941] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.453 [2024-09-27 13:27:04.445174] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.453 ctrlr pubkey: 00:22:29.453 00000000 76 27 7a 4e 62 31 94 69 b8 01 6d c0 48 5d 79 b9 v'zNb1.i..m.H]y. 00:22:29.453 00000010 07 71 79 bb 79 0d ad d9 03 43 fc 04 10 c8 12 ca .qy.y....C...... 
00:22:29.453 00000020 20 9d c3 c7 03 a2 d4 d1 4f a4 e8 06 5f e9 9c 6e .......O..._..n 00:22:29.453 00000030 d2 01 d4 f1 45 19 af 16 9e 87 dc e1 a3 4e 89 39 ....E........N.9 00:22:29.453 00000040 fe 38 e3 9a 19 a5 b2 f4 2d f5 0b 48 d4 80 03 8e .8......-..H.... 00:22:29.453 00000050 cb 17 f1 33 38 bb 22 0c 64 43 48 07 de f1 38 4e ...38.".dCH...8N 00:22:29.453 00000060 37 bc ad 1f 74 86 89 f0 cb 5e f0 c2 e9 04 d5 32 7...t....^.....2 00:22:29.453 00000070 02 5b f5 e3 53 ce 17 41 3d 27 f6 cd cb f6 67 c0 .[..S..A='....g. 00:22:29.453 00000080 7c 87 d3 91 22 f7 26 dd a3 79 c7 be b2 40 c5 86 |...".&..y...@.. 00:22:29.453 00000090 a8 e4 c4 0e ea c2 72 e9 c4 c8 91 d7 50 61 03 46 ......r.....Pa.F 00:22:29.453 000000a0 59 ce 6f e7 54 e7 be c4 42 10 8a 46 d5 99 5f 9a Y.o.T...B..F.._. 00:22:29.453 000000b0 db b9 1e ea be bc 35 29 27 26 2d 96 49 13 09 e9 ......5)'&-.I... 00:22:29.453 000000c0 1d 00 bc 0c d6 c4 f4 e2 24 dc 4d ee dd 3b 8f dd ........$.M..;.. 00:22:29.453 000000d0 26 3e e8 8e a0 5c 81 5d 74 ce 82 f6 fb 9b b1 78 &>...\.]t......x 00:22:29.453 000000e0 e9 5f 51 b6 ef 5a 72 ef 4a ab 27 9d b0 3a 7b cd ._Q..Zr.J.'..:{. 00:22:29.453 000000f0 fc 11 bc cf a0 1c f0 08 23 8c 96 d8 72 76 38 45 ........#...rv8E 00:22:29.453 00000100 71 07 d0 14 65 61 3e 07 ef 22 9a 82 65 23 16 ca q...ea>.."..e#.. 00:22:29.453 00000110 d2 78 a3 3e ae c6 8b 5b cb e3 f9 99 f0 2b 91 9c .x.>...[.....+.. 00:22:29.453 00000120 4c 80 1c ab 34 4c 58 3a c5 5a 99 19 49 96 53 8d L...4LX:.Z..I.S. 00:22:29.453 00000130 0b 65 bc 3b a0 e0 51 11 ea 1b 28 dd 51 1c 18 90 .e.;..Q...(.Q... 00:22:29.453 00000140 68 1e d8 a0 84 75 21 67 b6 aa e0 69 a0 ae d9 b9 h....u!g...i.... 00:22:29.453 00000150 dc 4b 80 1a 8e 38 c0 97 ad e2 b0 e5 63 a6 bd 8d .K...8......c... 00:22:29.453 00000160 46 73 5a 01 86 71 dc 77 a5 31 21 72 5c 64 83 a8 FsZ..q.w.1!r\d.. 00:22:29.453 00000170 5f 19 3b f2 80 1c a4 4d 56 3b 96 05 9b 9b 46 4a _.;....MV;....FJ 00:22:29.453 00000180 7e 01 f6 cb 7e 27 56 9b 5b f3 b5 00 55 85 6e f6 ~...~'V.[...U.n. 00:22:29.453 00000190 4d c6 b3 26 37 9c d8 df 30 2b b8 8e 0e c3 55 6d M..&7...0+....Um 00:22:29.453 000001a0 40 67 6e 32 b0 2d 83 11 d0 b2 63 c1 c2 c3 51 87 @gn2.-....c...Q. 00:22:29.453 000001b0 19 0e 71 65 48 39 dc 08 31 a8 fe 7e 52 b5 00 ba ..qeH9..1..~R... 00:22:29.453 000001c0 61 ea 3b 9a dc 40 a7 19 f1 82 f7 69 b3 91 33 27 a.;..@.....i..3' 00:22:29.453 000001d0 65 99 22 04 27 41 04 4a 7b 4f ae 48 85 d6 b5 77 e.".'A.J{O.H...w 00:22:29.453 000001e0 e8 4f 63 e9 3e 52 e6 41 b2 9a b8 80 64 2b 97 bd .Oc.>R.A....d+.. 00:22:29.453 000001f0 cb cb f1 f2 77 56 a1 b2 d6 df bc 79 94 cc 5f f5 ....wV.....y.._. 00:22:29.453 host pubkey: 00:22:29.453 00000000 1d af 5b 2e 43 dd ee 3a 6d 73 7a 35 52 59 95 92 ..[.C..:msz5RY.. 00:22:29.453 00000010 c3 fb c1 fe c4 9d 85 01 01 c7 2e 06 5e b1 32 41 ............^.2A 00:22:29.453 00000020 7c 27 b1 6d 30 60 be f7 78 a3 22 b9 56 30 40 ed |'.m0`..x.".V0@. 00:22:29.453 00000030 44 7d 5c 2a 57 29 ce 88 af 9e 4a 24 e4 17 4d ba D}\*W)....J$..M. 00:22:29.453 00000040 c4 25 71 48 e4 fb 45 58 90 f6 52 46 a0 6a 72 d1 .%qH..EX..RF.jr. 00:22:29.453 00000050 f6 2b 0a 36 66 b1 41 54 00 c6 ba 74 77 94 64 ab .+.6f.AT...tw.d. 00:22:29.453 00000060 98 c1 b4 bc 2d ed 56 6f fe 84 72 6a 58 c6 cb ff ....-.Vo..rjX... 00:22:29.453 00000070 80 78 3f 7f 94 f0 83 74 1a 20 0f 7b 73 15 ef c6 .x?....t. .{s... 00:22:29.453 00000080 2a 8d 12 3a 0c 76 0b 5f d1 17 b5 03 e4 f5 d2 b6 *..:.v._........ 00:22:29.453 00000090 53 1b 3c 70 b6 61 29 3f 2b f6 15 f9 72 cf a4 16 S...+g.G..'. 
00:22:29.453 000000b0 de 6d d1 70 5f ae f5 4e a5 6b a6 f1 1d 62 e1 17 .m.p_..N.k...b.. 00:22:29.453 000000c0 4c ae 48 21 65 c3 7b 82 d3 a2 af 6b 73 98 c4 fa L.H!e.{....ks... 00:22:29.453 000000d0 d0 93 19 c6 b5 31 8b b4 39 e9 dc 61 94 7f a3 4f .....1..9..a...O 00:22:29.453 000000e0 11 10 ee 29 2d ef c0 2f cc 2a 01 53 5d 21 37 4c ...)-../.*.S]!7L 00:22:29.453 000000f0 1e c6 4b c4 54 69 56 5b 1c 8e 83 4b 4d 0c 8f c0 ..K.TiV[...KM... 00:22:29.453 00000100 85 8f fb 43 7a 72 eb f7 67 9a bf f2 71 08 83 f0 ...Czr..g...q... 00:22:29.453 00000110 b1 dd a8 c8 a2 fe 57 a7 a5 04 cb d5 76 7b 65 de ......W.....v{e. 00:22:29.453 00000120 c0 02 09 a7 16 58 f7 18 a6 75 46 28 7d b5 47 02 .....X...uF(}.G. 00:22:29.453 00000130 7a 55 67 55 8d 24 30 87 c6 f3 b4 aa 4b 9d 2d f7 zUgU.$0.....K.-. 00:22:29.453 00000140 0c 18 f3 1e 6d 02 d5 fc 5c 50 56 75 5a e1 dd 83 ....m...\PVuZ... 00:22:29.453 00000150 d3 0d d5 bd 89 ff bf 68 52 5b 22 f7 d4 fe e6 77 .......hR["....w 00:22:29.453 00000160 22 5d 1a 2b 8f 07 07 7f 25 11 1f 12 e5 02 1a 97 "].+....%....... 00:22:29.453 00000170 3e 00 f1 11 4b 18 39 40 4e f5 55 6d 8b 25 68 f4 >...K.9@N.Um.%h. 00:22:29.453 00000180 03 63 27 ee 2e 78 c0 61 0f e1 02 a9 0a fd 57 0f .c'..x.a......W. 00:22:29.453 00000190 2f 7a 43 72 2f 13 4a 0d 68 87 cf 23 bd 01 00 8f /zCr/.J.h..#.... 00:22:29.453 000001a0 03 fb 6b e9 43 8a 72 dc da 24 aa 1c c3 cc 95 91 ..k.C.r..$...... 00:22:29.453 000001b0 db 4a a5 22 b1 b7 30 e5 eb 77 77 96 fb 93 a9 d3 .J."..0..ww..... 00:22:29.453 000001c0 9c 80 ec 5b 8a b6 2c 53 73 c7 28 a6 58 ce 7a 2c ...[..,Ss.(.X.z, 00:22:29.453 000001d0 4f f2 93 ff bb f9 f3 ed 0f 6e 18 33 94 a9 56 2a O........n.3..V* 00:22:29.453 000001e0 5c 0a 99 43 35 eb ca ca 41 81 e7 58 47 fb fd 01 \..C5...A..XG... 00:22:29.453 000001f0 77 05 b3 02 05 d7 27 6e f9 0d c5 a5 94 54 02 73 w.....'n.....T.s 00:22:29.453 dh secret: 00:22:29.453 00000000 57 71 98 0d bb 3b fb 56 a9 c1 e9 0d 56 c9 a5 67 Wq...;.V....V..g 00:22:29.453 00000010 9e b1 7f eb f9 7d 8c a0 41 6d 06 ef 2d dc a0 5c .....}..Am..-..\ 00:22:29.453 00000020 99 c1 ee 88 94 3e 6c 60 bc 0c e0 63 36 0a 0a 3c .....>l`...c6..< 00:22:29.453 00000030 4d df 45 2a c7 62 b2 5f 5f e2 61 5d 6d 0f d6 29 M.E*.b.__.a]m..) 00:22:29.453 00000040 a2 c9 8e 40 71 8b 16 19 af 44 cc c2 0f 43 cd 8c ...@q....D...C.. 00:22:29.453 00000050 2b 28 ff 9e 74 92 cd 69 31 56 18 eb e5 af ed 54 +(..t..i1V.....T 00:22:29.453 00000060 df 4b 2b 27 0f 75 7e a3 25 4e a4 61 4f 71 04 76 .K+'.u~.%N.aOq.v 00:22:29.453 00000070 c0 ae 49 aa c9 50 eb e1 2d 55 aa 23 c5 b4 f0 04 ..I..P..-U.#.... 00:22:29.453 00000080 c5 65 04 fb eb 19 2c 96 c3 11 2b ea 06 fb b0 ab .e....,...+..... 00:22:29.453 00000090 4c 0e 28 21 ef a2 2c 2c 52 1e ea dd 6b cb ba 62 L.(!..,,R...k..b 00:22:29.453 000000a0 f2 d0 eb 12 b5 9b bd c8 b1 ba 6b d5 e6 09 c5 38 ..........k....8 00:22:29.453 000000b0 17 49 e6 f9 9e 63 49 1e 29 31 f7 5b 88 ce ee 58 .I...cI.)1.[...X 00:22:29.453 000000c0 68 17 0d 9b b9 90 1e 36 f7 7e 8a 4c 18 e5 eb c8 h......6.~.L.... 00:22:29.453 000000d0 75 a5 d5 9a ec f3 6a 9b fd db fd 87 11 97 49 03 u.....j.......I. 00:22:29.453 000000e0 29 94 f6 70 00 21 9c 80 8c 6e 11 7d 9a 67 39 10 )..p.!...n.}.g9. 00:22:29.453 000000f0 16 bc 15 d8 31 17 5e 2d 7c fc ed 7d 8e e6 b9 96 ....1.^-|..}.... 00:22:29.453 00000100 8c b2 23 25 a8 96 54 40 04 51 a3 ae 06 10 64 cb ..#%..T@.Q....d. 00:22:29.453 00000110 16 32 90 ca 1b c2 2a 21 84 ed d5 c6 54 98 4e 71 .2....*!....T.Nq 00:22:29.453 00000120 1b 61 cb e4 0f a8 1a 4b c6 b0 55 59 bb 24 cc b2 .a.....K..UY.$.. 
00:22:29.453 00000130 b2 37 46 9c 01 cf 69 6c b1 58 73 3a 06 e7 50 7d .7F...il.Xs:..P} 00:22:29.453 00000140 8a 4a ed f1 ff ac 08 07 0c b8 22 e3 e6 cd 23 fd .J........"...#. 00:22:29.453 00000150 a0 07 fe 44 a6 8f 2e 8b e1 68 f8 d0 c3 8e 87 d3 ...D.....h...... 00:22:29.453 00000160 bf 67 ba d8 86 f8 d9 5b 0e 68 bc e8 8a 64 1b ea .g.....[.h...d.. 00:22:29.453 00000170 57 4c 3d ac 46 55 7b 80 63 d8 b4 75 6c 8d 5e 18 WL=.FU{.c..ul.^. 00:22:29.453 00000180 c6 49 42 ea 20 f2 51 33 9f 72 5e db 2b 26 91 cc .IB. .Q3.r^.+&.. 00:22:29.453 00000190 10 c1 86 96 0d 3f 96 b5 02 40 16 10 ea 51 da 35 .....?...@...Q.5 00:22:29.453 000001a0 79 25 71 95 7d 16 31 47 31 aa 33 a2 2b 48 ac 7e y%q.}.1G1.3.+H.~ 00:22:29.453 000001b0 6c e9 3a 63 3d bc f9 c8 82 ad 5c cf d1 4d e8 6e l.:c=.....\..M.n 00:22:29.453 000001c0 ff 45 f6 66 b0 2a ca 45 9c 8c 69 66 ee 54 93 e3 .E.f.*.E..if.T.. 00:22:29.453 000001d0 33 e1 4f 43 35 03 ab 96 3a 58 df 53 3a ba ae e5 3.OC5...:X.S:... 00:22:29.453 000001e0 0c 80 80 5d ff ac 56 95 23 93 d1 e0 6f 33 b1 b8 ...]..V.#...o3.. 00:22:29.453 000001f0 62 f5 d9 0d 6d 1f c6 b1 b0 2b a8 74 79 84 9a dc b...m....+.ty... 00:22:29.453 [2024-09-27 13:27:04.474253] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key4, hash=1, dhgroup=3, seq=3775755201, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.453 [2024-09-27 13:27:04.474550] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.453 [2024-09-27 13:27:04.497537] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.453 [2024-09-27 13:27:04.497942] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.453 [2024-09-27 13:27:04.498121] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.453 [2024-09-27 13:27:04.549871] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.453 [2024-09-27 13:27:04.550092] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 1 (sha256) 00:22:29.454 [2024-09-27 13:27:04.550413] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 3 (ffdhe4096) 00:22:29.454 [2024-09-27 13:27:04.550728] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.454 [2024-09-27 13:27:04.550976] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.454 ctrlr pubkey: 00:22:29.454 00000000 76 27 7a 4e 62 31 94 69 b8 01 6d c0 48 5d 79 b9 v'zNb1.i..m.H]y. 00:22:29.454 00000010 07 71 79 bb 79 0d ad d9 03 43 fc 04 10 c8 12 ca .qy.y....C...... 00:22:29.454 00000020 20 9d c3 c7 03 a2 d4 d1 4f a4 e8 06 5f e9 9c 6e .......O..._..n 00:22:29.454 00000030 d2 01 d4 f1 45 19 af 16 9e 87 dc e1 a3 4e 89 39 ....E........N.9 00:22:29.454 00000040 fe 38 e3 9a 19 a5 b2 f4 2d f5 0b 48 d4 80 03 8e .8......-..H.... 
00:22:29.454 00000050 cb 17 f1 33 38 bb 22 0c 64 43 48 07 de f1 38 4e ...38.".dCH...8N 00:22:29.454 00000060 37 bc ad 1f 74 86 89 f0 cb 5e f0 c2 e9 04 d5 32 7...t....^.....2 00:22:29.454 00000070 02 5b f5 e3 53 ce 17 41 3d 27 f6 cd cb f6 67 c0 .[..S..A='....g. 00:22:29.454 00000080 7c 87 d3 91 22 f7 26 dd a3 79 c7 be b2 40 c5 86 |...".&..y...@.. 00:22:29.454 00000090 a8 e4 c4 0e ea c2 72 e9 c4 c8 91 d7 50 61 03 46 ......r.....Pa.F 00:22:29.454 000000a0 59 ce 6f e7 54 e7 be c4 42 10 8a 46 d5 99 5f 9a Y.o.T...B..F.._. 00:22:29.454 000000b0 db b9 1e ea be bc 35 29 27 26 2d 96 49 13 09 e9 ......5)'&-.I... 00:22:29.454 000000c0 1d 00 bc 0c d6 c4 f4 e2 24 dc 4d ee dd 3b 8f dd ........$.M..;.. 00:22:29.454 000000d0 26 3e e8 8e a0 5c 81 5d 74 ce 82 f6 fb 9b b1 78 &>...\.]t......x 00:22:29.454 000000e0 e9 5f 51 b6 ef 5a 72 ef 4a ab 27 9d b0 3a 7b cd ._Q..Zr.J.'..:{. 00:22:29.454 000000f0 fc 11 bc cf a0 1c f0 08 23 8c 96 d8 72 76 38 45 ........#...rv8E 00:22:29.454 00000100 71 07 d0 14 65 61 3e 07 ef 22 9a 82 65 23 16 ca q...ea>.."..e#.. 00:22:29.454 00000110 d2 78 a3 3e ae c6 8b 5b cb e3 f9 99 f0 2b 91 9c .x.>...[.....+.. 00:22:29.454 00000120 4c 80 1c ab 34 4c 58 3a c5 5a 99 19 49 96 53 8d L...4LX:.Z..I.S. 00:22:29.454 00000130 0b 65 bc 3b a0 e0 51 11 ea 1b 28 dd 51 1c 18 90 .e.;..Q...(.Q... 00:22:29.454 00000140 68 1e d8 a0 84 75 21 67 b6 aa e0 69 a0 ae d9 b9 h....u!g...i.... 00:22:29.454 00000150 dc 4b 80 1a 8e 38 c0 97 ad e2 b0 e5 63 a6 bd 8d .K...8......c... 00:22:29.454 00000160 46 73 5a 01 86 71 dc 77 a5 31 21 72 5c 64 83 a8 FsZ..q.w.1!r\d.. 00:22:29.454 00000170 5f 19 3b f2 80 1c a4 4d 56 3b 96 05 9b 9b 46 4a _.;....MV;....FJ 00:22:29.454 00000180 7e 01 f6 cb 7e 27 56 9b 5b f3 b5 00 55 85 6e f6 ~...~'V.[...U.n. 00:22:29.454 00000190 4d c6 b3 26 37 9c d8 df 30 2b b8 8e 0e c3 55 6d M..&7...0+....Um 00:22:29.454 000001a0 40 67 6e 32 b0 2d 83 11 d0 b2 63 c1 c2 c3 51 87 @gn2.-....c...Q. 00:22:29.454 000001b0 19 0e 71 65 48 39 dc 08 31 a8 fe 7e 52 b5 00 ba ..qeH9..1..~R... 00:22:29.454 000001c0 61 ea 3b 9a dc 40 a7 19 f1 82 f7 69 b3 91 33 27 a.;..@.....i..3' 00:22:29.454 000001d0 65 99 22 04 27 41 04 4a 7b 4f ae 48 85 d6 b5 77 e.".'A.J{O.H...w 00:22:29.454 000001e0 e8 4f 63 e9 3e 52 e6 41 b2 9a b8 80 64 2b 97 bd .Oc.>R.A....d+.. 00:22:29.454 000001f0 cb cb f1 f2 77 56 a1 b2 d6 df bc 79 94 cc 5f f5 ....wV.....y.._. 00:22:29.454 host pubkey: 00:22:29.454 00000000 fe a6 64 8c 31 99 b0 90 69 44 9e 5a 01 4e d2 c1 ..d.1...iD.Z.N.. 00:22:29.454 00000010 ef d4 59 9f 50 db 6b fa 37 ba fb 3b f5 44 99 24 ..Y.P.k.7..;.D.$ 00:22:29.454 00000020 d1 8a d7 14 03 11 85 58 43 cf cd aa 9c bb 89 d4 .......XC....... 00:22:29.454 00000030 dd a2 68 2b 5c 57 39 2b bc 12 a1 47 3b e7 d3 c7 ..h+\W9+...G;... 00:22:29.454 00000040 cf 8b ee ca bb 77 5e f7 c0 3c fc 97 14 88 26 8a .....w^..<....&. 00:22:29.454 00000050 28 98 96 a5 2a f8 1d 40 ca f5 b2 aa 0a 95 37 03 (...*..@......7. 00:22:29.454 00000060 64 cd ba 28 10 58 02 5e 96 90 7a c6 19 9f b5 d1 d..(.X.^..z..... 00:22:29.454 00000070 68 de 14 9d 9f a3 2f e1 f7 85 99 ab 1a 2b b0 d5 h...../......+.. 00:22:29.454 00000080 f7 fb 4f ac 51 2c c1 eb b2 b4 06 37 c1 93 5d 8c ..O.Q,.....7..]. 00:22:29.454 00000090 c1 d1 d5 04 db fa d7 b0 62 e3 53 7d 2c f5 92 30 ........b.S},..0 00:22:29.454 000000a0 23 ca 6b a7 0f 3a e2 31 ef 7b 34 cd b4 b4 20 8b #.k..:.1.{4... . 00:22:29.454 000000b0 3f 0e f3 be 61 66 ed 6d aa fa 09 3f cc a3 b9 e0 ?...af.m...?.... 00:22:29.454 000000c0 f8 49 a7 14 49 bb 0a a8 52 15 00 33 52 11 d1 8e .I..I...R..3R... 
00:22:29.454 000000d0 56 09 8f e6 c4 6f ec 52 ff 8b 1c e0 66 26 5d f3 V....o.R....f&]. 00:22:29.454 000000e0 63 fd 28 5e 41 da 43 ab f8 9d 3a da b1 7a 99 23 c.(^A.C...:..z.# 00:22:29.454 000000f0 b3 4f f0 40 71 51 90 d5 21 33 97 2d 0f 57 27 95 .O.@qQ..!3.-.W'. 00:22:29.454 00000100 b8 f2 9d 41 fd 71 29 70 e8 06 a0 78 a4 27 45 8b ...A.q)p...x.'E. 00:22:29.454 00000110 03 7a 34 1a a9 ba 26 e6 e9 56 77 e9 4a a1 94 ee .z4...&..Vw.J... 00:22:29.454 00000120 5c f4 6d 95 58 3c ce 81 34 bc 1c e0 d8 27 d7 6f \.m.X<..4....'.o 00:22:29.454 00000130 0b af cd d7 d3 46 c2 90 5e af ec b6 ee 37 0f 3c .....F..^....7.< 00:22:29.454 00000140 18 1c 60 00 49 7e 21 35 65 bd 75 3d 27 13 da 10 ..`.I~!5e.u='... 00:22:29.454 00000150 e9 1a dd 83 3d 30 4a 77 42 91 fd e9 6e ed 28 db ....=0JwB...n.(. 00:22:29.454 00000160 7f 37 5f c1 c3 0e f1 e4 68 62 a9 c8 c1 82 b4 65 .7_.....hb.....e 00:22:29.454 00000170 9d 92 a7 9b ef 62 e0 6e 27 69 57 b8 84 a0 19 35 .....b.n'iW....5 00:22:29.454 00000180 27 bd fc d1 f2 50 0a f9 ff 7c c9 50 bd 3c de ef '....P...|.P.<.. 00:22:29.454 00000190 c3 6a c7 ce 22 f7 21 27 13 6b 1d 6b 56 a0 d2 9d .j..".!'.k.kV... 00:22:29.454 000001a0 39 06 21 78 6a ca 90 43 05 d6 79 6e 56 6e 5b c3 9.!xj..C..ynVn[. 00:22:29.454 000001b0 e4 0b 4b 04 c6 79 fe 53 59 d4 fe de ac 8b 8f 8d ..K..y.SY....... 00:22:29.454 000001c0 f6 a6 f9 e3 b8 9f 38 1e 25 06 57 5b 6b 28 eb f2 ......8.%.W[k(.. 00:22:29.454 000001d0 69 cc a2 86 71 ab a7 4d 7d 97 e3 ce 79 3a e6 1e i...q..M}...y:.. 00:22:29.454 000001e0 47 7c 39 d0 2f 89 93 80 10 ce b0 96 c5 cd a5 d5 G|9./........... 00:22:29.454 000001f0 5c e8 70 f6 45 d4 a3 bf ca f3 23 83 1f 4f ae 11 \.p.E.....#..O.. 00:22:29.454 dh secret: 00:22:29.454 00000000 08 ba ab 65 ba 9d 92 95 b8 99 ed ab 56 4f f8 d0 ...e........VO.. 00:22:29.454 00000010 80 9d 47 93 09 7c 2e e8 be 5e 0f 01 a6 6a cf 28 ..G..|...^...j.( 00:22:29.454 00000020 b2 1f 20 9d e0 1c 1f d2 ee 74 d4 dc e4 d7 ad 68 .. ......t.....h 00:22:29.454 00000030 02 ba 67 4c 83 f5 5a 86 d9 36 38 48 48 e9 75 45 ..gL..Z..68HH.uE 00:22:29.454 00000040 d5 2e e1 0f 22 50 d5 9e f2 00 f7 b1 c3 80 24 2c ...."P........$, 00:22:29.454 00000050 1e 14 e8 4b 4d 27 8d 6f eb 2f 78 94 1e 97 0b ac ...KM'.o./x..... 00:22:29.454 00000060 69 fd 62 c2 ce 7e 70 08 66 d3 91 71 e4 ae f0 b9 i.b..~p.f..q.... 00:22:29.454 00000070 44 2f b0 28 40 d8 58 ae 15 14 0d 0c de 0d 36 40 D/.(@.X.......6@ 00:22:29.454 00000080 4c bf 47 d2 eb 11 0c 65 df 75 42 b4 8f 58 bc 2c L.G....e.uB..X., 00:22:29.454 00000090 c4 b1 35 9b 1c 2c 9b aa 70 a2 31 e5 db 2b c3 ae ..5..,..p.1..+.. 00:22:29.454 000000a0 73 e7 30 21 84 2d 0d c4 65 d0 91 c2 09 1a 83 7b s.0!.-..e......{ 00:22:29.454 000000b0 56 c1 9f b4 89 f8 4f 37 ba e1 e2 66 c6 41 50 f5 V.....O7...f.AP. 00:22:29.454 000000c0 7f 14 b4 00 95 71 bc 5d bc b1 a4 fc 84 a6 ce 34 .....q.].......4 00:22:29.454 000000d0 52 fd c8 f0 06 3a e3 1e 00 81 41 af e8 e3 1d ff R....:....A..... 00:22:29.454 000000e0 a9 b8 ef 75 83 bf de c5 ad 7d 61 bc db 51 3d a1 ...u.....}a..Q=. 00:22:29.454 000000f0 4c fb e2 17 4b 58 e4 9b 76 6a 89 55 c0 08 bf 3f L...KX..vj.U...? 00:22:29.454 00000100 93 f1 df 9f 33 26 39 4f a7 9c 27 fa fa 18 c2 fa ....3&9O..'..... 00:22:29.454 00000110 ff a9 e3 f7 66 b8 27 38 4e 27 14 00 ee 35 24 4b ....f.'8N'...5$K 00:22:29.454 00000120 a6 57 16 07 25 c6 69 a8 cc 52 e0 35 6f 5e 89 f2 .W..%.i..R.5o^.. 00:22:29.454 00000130 5f d6 09 a1 95 e2 f6 36 ab 21 7d c0 6e 48 de 8b _......6.!}.nH.. 00:22:29.454 00000140 67 63 00 af 76 20 e0 b2 e0 31 c1 f4 1a 70 31 dd gc..v ...1...p1. 
00:22:29.454 00000150 2f 77 c2 3b 37 67 48 d1 64 8d 90 91 c0 3a e8 5e /w.;7gH.d....:.^ 00:22:29.454 00000160 a1 78 7b d5 ab 66 90 4c e3 6f 9f aa 8e 3e d2 3e .x{..f.L.o...>.> 00:22:29.454 00000170 f8 82 e9 02 83 14 7a 48 b8 aa d9 d1 16 e4 46 a5 ......zH......F. 00:22:29.454 00000180 06 2d 37 95 bf 4b 56 36 5c 40 d6 d8 e2 aa 11 e0 .-7..KV6\@...... 00:22:29.454 00000190 ae 26 a5 32 08 39 35 bc 10 bf 70 3d a8 5e 11 ef .&.2.95...p=.^.. 00:22:29.454 000001a0 cf 38 fb b1 f9 d6 c5 c8 40 f5 d8 cb 00 ed b7 3f .8......@......? 00:22:29.454 000001b0 2d 0f a9 d1 19 e9 8a 65 2b 97 c3 01 d3 11 ad 6e -......e+......n 00:22:29.454 000001c0 a2 c9 94 8c a4 6a 21 06 08 ad a3 8b 82 bb dc 9a .....j!......... 00:22:29.454 000001d0 fb b1 5b 4d e8 27 9b 62 89 52 97 6a 24 c3 04 1f ..[M.'.b.R.j$... 00:22:29.454 000001e0 82 8f 6c c8 34 8e dd c6 85 26 0d e6 07 9e f1 aa ..l.4....&...... 00:22:29.454 000001f0 7c 0f 93 df 9c 31 27 2c 93 ab fc 0f 97 76 5e 84 |....1',.....v^. 00:22:29.454 [2024-09-27 13:27:04.580223] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key4, hash=1, dhgroup=3, seq=3775755202, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.454 [2024-09-27 13:27:04.580603] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.454 [2024-09-27 13:27:04.604001] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.454 [2024-09-27 13:27:04.604499] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.454 [2024-09-27 13:27:04.604769] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.454 [2024-09-27 13:27:06.512636] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.454 [2024-09-27 13:27:06.512986] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.454 [2024-09-27 13:27:06.513167] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 4 (ffdhe6144) 00:22:29.454 [2024-09-27 13:27:06.513309] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.454 [2024-09-27 13:27:06.513524] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.454 ctrlr pubkey: 00:22:29.454 00000000 3b 66 ec 7b 12 8f ea 9c da d8 1e 53 16 5d 7f c0 ;f.{.......S.].. 00:22:29.454 00000010 54 c2 df f2 17 b6 0b 6c dc 1c fe f5 98 ba e5 d5 T......l........ 00:22:29.454 00000020 c5 6e 25 48 fd 2e de eb 50 9a cf 50 b2 56 28 65 .n%H....P..P.V(e 00:22:29.454 00000030 83 db 2a 1b 59 d6 7e 1d f1 e6 ff 16 bd c7 a2 de ..*.Y.~......... 00:22:29.454 00000040 c6 7d be 69 f7 32 e4 a0 2a 90 63 8b a0 04 56 f8 .}.i.2..*.c...V. 00:22:29.454 00000050 30 82 86 1c 11 72 00 f9 84 5c 62 5f 06 12 3a 8c 0....r...\b_..:. 00:22:29.454 00000060 ea a7 d3 c4 2a 0a a1 a7 f0 66 74 0c db dc b8 a0 ....*....ft..... 00:22:29.454 00000070 2a 2e d9 8a a4 40 88 e7 15 a0 95 83 4c ce 54 3f *....@......L.T? 
00:22:29.454 00000080 7c 49 cb ea a9 0a d9 6a d3 a9 9e 01 89 1f 92 f4 |I.....j........ 00:22:29.454 00000090 e4 4a 4a ff 55 3d ef 52 5e d3 5d df 33 5d 9d 73 .JJ.U=.R^.].3].s 00:22:29.454 000000a0 02 b8 3e b2 c8 e1 7c 24 6d aa db 5b 42 05 a5 ba ..>...|$m..[B... 00:22:29.454 000000b0 e1 41 da 2b 44 f7 da 96 94 35 c5 cd 2f fd 65 b7 .A.+D....5../.e. 00:22:29.454 000000c0 e9 bc d2 00 58 3f f5 c4 26 f5 93 24 94 14 3c 3a ....X?..&..$..<: 00:22:29.454 000000d0 01 09 63 b0 34 f8 49 99 ae c1 8e f1 f3 f7 26 16 ..c.4.I.......&. 00:22:29.454 000000e0 15 8a aa 93 c5 ff 96 58 1b b1 c8 25 35 32 ac 6f .......X...%52.o 00:22:29.454 000000f0 d8 c2 09 4d 17 43 df e3 09 87 e1 48 55 a5 08 35 ...M.C.....HU..5 00:22:29.454 00000100 2e 2b d9 4b ae 95 c7 56 30 10 69 93 ab ff 23 90 .+.K...V0.i...#. 00:22:29.454 00000110 bc 72 a9 04 77 b1 ca f8 e2 5b a1 7c 65 87 05 42 .r..w....[.|e..B 00:22:29.454 00000120 6c 0e f2 96 f2 0d 75 c4 94 bb 8c ca 2e 73 f8 77 l.....u......s.w 00:22:29.454 00000130 f7 9d bd 52 2f 0b bf 54 d5 d1 37 17 7a 94 3c 81 ...R/..T..7.z.<. 00:22:29.455 00000140 be 91 ba 65 e0 36 fe d3 e1 8f 07 aa ac 3f ad 8c ...e.6.......?.. 00:22:29.455 00000150 61 2a 1e 64 a6 23 dd 75 85 a1 bb 02 08 a0 cd 67 a*.d.#.u.......g 00:22:29.455 00000160 a0 f0 f3 22 25 b7 3a a4 bf bf f1 77 e1 c8 e9 b6 ..."%.:....w.... 00:22:29.455 00000170 99 ee 07 07 85 9e 97 25 bb 58 02 bf 09 62 21 e5 .......%.X...b!. 00:22:29.455 00000180 32 4c b8 f7 48 dc fa 78 ba 43 16 b6 da 5d 5f 67 2L..H..x.C...]_g 00:22:29.455 00000190 a1 f2 91 8d 2f 88 2b c6 b0 c7 3c 4d 5f d8 c7 24 ..../.+.........Ds..d.]I. 00:22:29.455 00000180 6b 40 7c 5f 50 15 11 4f 49 19 14 0c 5e 15 56 0b k@|_P..OI...^.V. 00:22:29.455 00000190 0b 53 b0 f1 ee e7 d6 3c b8 6b c0 c0 eb 51 62 5b .S.....<.k...Qb[ 00:22:29.455 000001a0 24 9f b1 ca 22 1f 4a a4 86 42 17 90 c0 45 05 8a $...".J..B...E.. 00:22:29.455 000001b0 ff be 14 88 65 e7 bd 48 e5 42 0a b5 69 c4 47 8f ....e..H.B..i.G. 00:22:29.455 000001c0 25 74 01 09 f9 d9 2a 14 cd 74 21 ba cf 63 d1 08 %t....*..t!..c.. 00:22:29.455 000001d0 c3 22 e6 40 ac f5 e3 2b 51 e9 5d 51 ac ef df 1e .".@...+Q.]Q.... 00:22:29.455 000001e0 9b 10 ba 55 d3 9c cf 1f e3 2d cf 90 57 e2 65 da ...U.....-..W.e. 00:22:29.455 000001f0 8e 40 3d ba 13 0f be 9d 0e c5 d1 40 06 98 f8 ed .@=........@.... 00:22:29.455 00000200 9f b0 90 cd c6 8c e7 1f 6a d7 23 e5 c6 b0 82 88 ........j.#..... 00:22:29.455 00000210 59 3b 8a 08 67 7b 5f dc 79 16 11 ab f8 a9 a0 a4 Y;..g{_.y....... 00:22:29.455 00000220 56 e2 4d cc f8 6a 48 09 37 c2 fb 2f a4 3a 8b 45 V.M..jH.7../.:.E 00:22:29.455 00000230 59 26 ac 47 e8 b9 22 ba 41 d3 5a de 4c ac b9 b7 Y&.G..".A.Z.L... 00:22:29.455 00000240 c4 46 25 25 b9 b1 49 7c ef 0c 2b 55 c6 0e 46 22 .F%%..I|..+U..F" 00:22:29.455 00000250 8a cd c7 f9 df dc 06 ea a3 74 6f 60 6e c8 90 06 .........to`n... 00:22:29.455 00000260 e6 5c 9d 02 ef 32 d7 34 03 0b 70 03 4f cf 7c 11 .\...2.4..p.O.|. 00:22:29.455 00000270 77 33 8f d1 61 d0 81 d9 09 fe d7 ee 60 a1 8f f9 w3..a.......`... 00:22:29.455 00000280 dc 71 e0 e1 21 f3 38 71 53 aa e1 25 e2 a8 41 d0 .q..!.8qS..%..A. 00:22:29.455 00000290 70 17 c8 51 ad be 9a c5 1b 86 9b ae 71 b6 a0 c4 p..Q........q... 00:22:29.455 000002a0 5b 90 6c 66 b0 ff ff b4 73 b1 8e 6d 7f 39 45 87 [.lf....s..m.9E. 00:22:29.455 000002b0 36 5e 49 ef ed c3 f5 c5 e8 ea a1 b6 fe e7 82 47 6^I............G 00:22:29.455 000002c0 95 72 a0 2f 4b 5e 04 be 04 00 50 bf 43 52 3b 18 .r./K^....P.CR;. 00:22:29.455 000002d0 a9 62 32 68 c3 e3 e0 20 2c 7c 7a b8 34 a4 ed 22 .b2h... ,|z.4.." 
00:22:29.455 000002e0 dd 2a ee 40 a7 57 9d 01 5e f7 21 8f 24 c7 1f a6 .*.@.W..^.!.$... 00:22:29.455 000002f0 19 a7 b0 98 b8 a1 02 ae e9 8b 26 2f b3 79 c5 43 ..........&/.y.C 00:22:29.455 dh secret: 00:22:29.455 00000000 1e ad 61 db fc 30 19 9f 55 8e 8e d5 0c a7 a5 2e ..a..0..U....... 00:22:29.455 00000010 be 11 59 98 f4 d5 d9 41 5f 3e 5e 72 05 f4 82 89 ..Y....A_>^r.... 00:22:29.455 00000020 0b 57 ac 2c 87 13 be 9d f7 b4 ed 17 0a e5 f4 16 .W.,............ 00:22:29.455 00000030 51 7c 03 d8 13 ad c3 66 6b a0 8f d1 f7 da 45 db Q|.....fk.....E. 00:22:29.455 00000040 85 a2 87 e3 c5 96 00 25 df b1 9c ad 73 e5 21 c4 .......%....s.!. 00:22:29.455 00000050 d7 74 07 b5 b2 da 41 8c eb 0c 62 ff e1 d5 12 e9 .t....A...b..... 00:22:29.455 00000060 7d 0f 5c 88 b1 40 e0 e6 c1 55 56 2a 2a dc 8b 53 }.\..@...UV**..S 00:22:29.455 00000070 f3 18 80 dc 07 fc 09 e1 a3 48 68 43 75 0f 23 ed .........HhCu.#. 00:22:29.455 00000080 07 59 c1 46 f7 fe ea 11 fe 8b 81 5e d9 84 c1 a0 .Y.F.......^.... 00:22:29.455 00000090 92 07 e8 ed 72 8b 15 71 31 7a 83 01 d1 2b 06 32 ....r..q1z...+.2 00:22:29.455 000000a0 5e 17 2c 8a 2c fd 8b a8 f5 e8 9a b0 15 e1 37 b0 ^.,.,.........7. 00:22:29.455 000000b0 cf b0 57 f0 16 6d a6 bc e2 d8 11 91 a6 53 49 25 ..W..m.......SI% 00:22:29.455 000000c0 66 5d 45 51 6d 58 4d 42 4b f0 7b 28 8b 7d 92 64 f]EQmXMBK.{(.}.d 00:22:29.455 000000d0 f9 d6 77 a4 08 11 09 5d 62 39 e6 bd 16 94 4c 91 ..w....]b9....L. 00:22:29.455 000000e0 5e 3c f2 4d e2 2a 40 b7 2c de ea 2d 40 ef ba 30 ^<.M.*@.,..-@..0 00:22:29.455 000000f0 41 4e 38 04 52 02 25 af 1e 65 c0 97 57 d0 2a 33 AN8.R.%..e..W.*3 00:22:29.455 00000100 38 da 37 51 db fb 21 9e ba 10 6c 25 eb 0d 27 ec 8.7Q..!...l%..'. 00:22:29.455 00000110 1e 26 62 67 62 ce 6e 13 af a7 16 1b 9a 0d 92 17 .&bgb.n......... 00:22:29.455 00000120 58 65 1a 80 dc 6c 38 05 0a 31 e4 ed b2 36 f2 8b Xe...l8..1...6.. 00:22:29.455 00000130 06 8f 99 aa 09 86 4f 37 75 fc d5 a7 e3 97 96 50 ......O7u......P 00:22:29.455 00000140 b6 a0 89 0d 89 b9 c3 97 02 78 45 62 b5 e7 ae db .........xEb.... 00:22:29.455 00000150 d2 eb c8 b0 d3 63 54 82 42 1e e9 8c a5 b5 36 89 .....cT.B.....6. 00:22:29.455 00000160 02 de ca 86 4c 49 82 1d 0e 2d bb dc c2 06 50 94 ....LI...-....P. 00:22:29.455 00000170 81 77 b3 fb 5b ba ec 09 f1 98 f4 70 b3 76 a5 87 .w..[......p.v.. 00:22:29.455 00000180 31 63 36 70 b5 79 5f da fb 59 4a 70 f3 4b c8 9f 1c6p.y_..YJp.K.. 00:22:29.455 00000190 81 df aa 6a 35 5a 9c b1 ee 94 7d 1c 53 95 66 1c ...j5Z....}.S.f. 00:22:29.455 000001a0 93 a3 01 4b 9d 65 67 92 36 1f da 50 56 3b 56 7d ...K.eg.6..PV;V} 00:22:29.455 000001b0 10 66 e5 ee fc b9 e6 1a 79 ca e9 1b 6c 49 28 f2 .f......y...lI(. 00:22:29.455 000001c0 8a ee cf 81 5c be 66 fb 6e cb 39 3d 0e 9a 2b 53 ....\.f.n.9=..+S 00:22:29.455 000001d0 5c ce 69 69 42 38 55 44 63 3d 88 e5 40 19 82 0e \.iiB8UDc=..@... 00:22:29.455 000001e0 ac ec 29 23 df 57 ca c4 34 c5 df 5f 18 9a 54 27 ..)#.W..4.._..T' 00:22:29.455 000001f0 c1 99 8d c5 87 45 9a 09 d8 32 0d 70 e6 d1 9b 2d .....E...2.p...- 00:22:29.455 00000200 42 1a 55 3c fb 53 6a 52 88 97 7a c2 2a 0e b8 e0 B.U<.SjR..z.*... 00:22:29.455 00000210 6e a1 32 71 36 c1 7f 2a 74 4a 29 0b 62 c4 51 b6 n.2q6..*tJ).b.Q. 00:22:29.455 00000220 3b cb 5c a7 03 3f 03 b3 d4 af 62 42 97 b3 07 da ;.\..?....bB.... 00:22:29.455 00000230 e1 87 e9 b8 70 ca 67 53 3c 28 ae f9 0d 24 46 b1 ....p.gS<(...$F. 00:22:29.455 00000240 1b bb 22 c9 10 b7 5b df e3 b3 35 a7 85 ca 4b 71 .."...[...5...Kq 00:22:29.455 00000250 0a 1e de 16 c3 f9 31 49 0b 5c 12 3b 37 d6 28 d2 ......1I.\.;7.(. 
00:22:29.455 00000260 d8 64 b1 c4 c3 9e 20 b2 a4 8f ac 1d 2b 5f ea 35 .d.... .....+_.5 00:22:29.455 00000270 33 7b dd fa 96 7d 47 23 d7 11 2d 99 46 b9 1a 63 3{...}G#..-.F..c 00:22:29.455 00000280 69 ed 8c 20 d8 00 71 c8 db 7b 2d 31 e5 d2 81 99 i.. ..q..{-1.... 00:22:29.456 00000290 a2 c0 de db 98 50 bd 08 1b 4c a7 69 86 08 6f 2e .....P...L.i..o. 00:22:29.456 000002a0 1c 6d ac 09 9f 28 a2 56 c7 cd f6 69 22 e0 e3 b1 .m...(.V...i"... 00:22:29.456 000002b0 a9 2c e6 73 90 20 6e c3 27 3e db d5 0b 5a bc 74 .,.s. n.'>...Z.t 00:22:29.456 000002c0 67 8b c3 6c 98 5c 14 48 1b ed 4f 11 78 60 5d 4f g..l.\.H..O.x`]O 00:22:29.456 000002d0 fe 6f 79 f7 95 53 88 15 c8 93 94 f3 30 00 5e 01 .oy..S......0.^. 00:22:29.456 000002e0 51 9e fc 03 7f af 6a bd b2 ab c7 0d 8e 87 70 50 Q.....j.......pP 00:22:29.456 000002f0 1a 10 65 9b 12 df 38 ec 9c 86 ce 72 4d d6 96 c2 ..e...8....rM... 00:22:29.456 [2024-09-27 13:27:06.586399] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key0, hash=1, dhgroup=4, seq=3775755203, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.456 [2024-09-27 13:27:06.586803] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.456 [2024-09-27 13:27:06.641232] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.456 [2024-09-27 13:27:06.641564] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.456 [2024-09-27 13:27:06.641808] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.456 [2024-09-27 13:27:06.642002] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.456 [2024-09-27 13:27:06.692716] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.456 [2024-09-27 13:27:06.693023] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 1 (sha256) 00:22:29.456 [2024-09-27 13:27:06.693125] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 4 (ffdhe6144) 00:22:29.456 [2024-09-27 13:27:06.693254] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.456 [2024-09-27 13:27:06.693461] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.456 ctrlr pubkey: 00:22:29.456 00000000 3b 66 ec 7b 12 8f ea 9c da d8 1e 53 16 5d 7f c0 ;f.{.......S.].. 00:22:29.456 00000010 54 c2 df f2 17 b6 0b 6c dc 1c fe f5 98 ba e5 d5 T......l........ 00:22:29.456 00000020 c5 6e 25 48 fd 2e de eb 50 9a cf 50 b2 56 28 65 .n%H....P..P.V(e 00:22:29.456 00000030 83 db 2a 1b 59 d6 7e 1d f1 e6 ff 16 bd c7 a2 de ..*.Y.~......... 00:22:29.456 00000040 c6 7d be 69 f7 32 e4 a0 2a 90 63 8b a0 04 56 f8 .}.i.2..*.c...V. 00:22:29.456 00000050 30 82 86 1c 11 72 00 f9 84 5c 62 5f 06 12 3a 8c 0....r...\b_..:. 00:22:29.456 00000060 ea a7 d3 c4 2a 0a a1 a7 f0 66 74 0c db dc b8 a0 ....*....ft..... 
00:22:29.456 00000070 2a 2e d9 8a a4 40 88 e7 15 a0 95 83 4c ce 54 3f *....@......L.T? 00:22:29.456 00000080 7c 49 cb ea a9 0a d9 6a d3 a9 9e 01 89 1f 92 f4 |I.....j........ 00:22:29.456 00000090 e4 4a 4a ff 55 3d ef 52 5e d3 5d df 33 5d 9d 73 .JJ.U=.R^.].3].s 00:22:29.456 000000a0 02 b8 3e b2 c8 e1 7c 24 6d aa db 5b 42 05 a5 ba ..>...|$m..[B... 00:22:29.456 000000b0 e1 41 da 2b 44 f7 da 96 94 35 c5 cd 2f fd 65 b7 .A.+D....5../.e. 00:22:29.456 000000c0 e9 bc d2 00 58 3f f5 c4 26 f5 93 24 94 14 3c 3a ....X?..&..$..<: 00:22:29.456 000000d0 01 09 63 b0 34 f8 49 99 ae c1 8e f1 f3 f7 26 16 ..c.4.I.......&. 00:22:29.456 000000e0 15 8a aa 93 c5 ff 96 58 1b b1 c8 25 35 32 ac 6f .......X...%52.o 00:22:29.456 000000f0 d8 c2 09 4d 17 43 df e3 09 87 e1 48 55 a5 08 35 ...M.C.....HU..5 00:22:29.456 00000100 2e 2b d9 4b ae 95 c7 56 30 10 69 93 ab ff 23 90 .+.K...V0.i...#. 00:22:29.456 00000110 bc 72 a9 04 77 b1 ca f8 e2 5b a1 7c 65 87 05 42 .r..w....[.|e..B 00:22:29.456 00000120 6c 0e f2 96 f2 0d 75 c4 94 bb 8c ca 2e 73 f8 77 l.....u......s.w 00:22:29.456 00000130 f7 9d bd 52 2f 0b bf 54 d5 d1 37 17 7a 94 3c 81 ...R/..T..7.z.<. 00:22:29.456 00000140 be 91 ba 65 e0 36 fe d3 e1 8f 07 aa ac 3f ad 8c ...e.6.......?.. 00:22:29.456 00000150 61 2a 1e 64 a6 23 dd 75 85 a1 bb 02 08 a0 cd 67 a*.d.#.u.......g 00:22:29.456 00000160 a0 f0 f3 22 25 b7 3a a4 bf bf f1 77 e1 c8 e9 b6 ..."%.:....w.... 00:22:29.456 00000170 99 ee 07 07 85 9e 97 25 bb 58 02 bf 09 62 21 e5 .......%.X...b!. 00:22:29.456 00000180 32 4c b8 f7 48 dc fa 78 ba 43 16 b6 da 5d 5f 67 2L..H..x.C...]_g 00:22:29.456 00000190 a1 f2 91 8d 2f 88 2b c6 b0 c7 3c 4d 5f d8 c7 24 ..../.+......7..U.. 00:22:29.456 00000020 07 83 13 b0 5c 2d 5e 1f 5b f5 35 f8 ea df 3c f3 ....\-^.[.5...<. 00:22:29.456 00000030 99 33 5e 45 95 34 71 40 6d d5 85 bd 97 5c 75 37 .3^E.4q@m....\u7 00:22:29.456 00000040 d3 b7 c5 31 8e f8 b4 13 67 02 fb 63 e6 d8 88 45 ...1....g..c...E 00:22:29.456 00000050 84 7c da d0 03 e9 23 13 f8 18 ba 5b 58 78 81 49 .|....#....[Xx.I 00:22:29.456 00000060 e7 8e 22 b1 c3 98 6d 85 a1 ac 00 ba 96 28 a3 fe .."...m......(.. 00:22:29.456 00000070 d1 ea e4 8d 41 06 a3 2e 3f 56 6a e0 5f 7e 91 04 ....A...?Vj._~.. 00:22:29.456 00000080 43 a2 f4 57 4c a2 56 98 dd cf 88 4e 43 90 c1 eb C..WL.V....NC... 00:22:29.456 00000090 4b e9 f1 55 d6 11 0b 5f 28 bd f6 b0 11 70 2a 1e K..U..._(....p*. 00:22:29.456 000000a0 c3 d4 9c 54 90 f7 e1 5f 92 96 da 4b 02 e0 6f d0 ...T..._...K..o. 00:22:29.456 000000b0 82 5e 15 cd f9 a9 dc a9 3c 8f 6f 75 b1 a6 b2 da .^......<.ou.... 00:22:29.456 000000c0 8f 53 a4 f4 81 bc 68 64 a4 b1 64 1e 42 4c 11 26 .S....hd..d.BL.& 00:22:29.456 000000d0 64 59 db af 10 94 fb e0 cf 7e 90 6e 4b 09 ed e5 dY.......~.nK... 00:22:29.456 000000e0 30 7d 9a e7 e0 29 7b 2f 2b 31 fa 0a 50 61 ad 4a 0}...){/+1..Pa.J 00:22:29.456 000000f0 65 23 1d 94 db 9a 4c 01 3e 51 27 86 20 45 47 53 e#....L.>Q'. EGS 00:22:29.456 00000100 65 25 93 f3 cb 7d 2c c3 0e 3d 10 df 39 31 b4 33 e%...},..=..91.3 00:22:29.456 00000110 20 84 7b 61 de 72 d8 10 bb 6a 1e b3 9a 62 86 4b .{a.r...j...b.K 00:22:29.456 00000120 14 be f8 94 19 65 8f d7 48 e2 1d a6 ab e1 0e 33 .....e..H......3 00:22:29.456 00000130 2f b3 0d e4 65 11 e3 b4 25 b3 fe 2a 02 88 3e 74 /...e...%..*..>t 00:22:29.456 00000140 08 bc 85 df a0 d9 c1 bd 74 33 e9 59 0f fd aa 1e ........t3.Y.... 00:22:29.456 00000150 a8 48 b4 7c 0a b0 89 3d ec f6 f8 9c 50 2f 17 b6 .H.|...=....P/.. 00:22:29.456 00000160 57 c8 2a 6b 03 41 50 b2 a2 16 1c da f2 a2 f4 e4 W.*k.AP......... 
00:22:29.456 00000170 45 e2 f5 69 06 60 02 12 94 3a 68 4e ee 6e c8 ba E..i.`...:hN.n.. 00:22:29.456 00000180 7c 55 d7 f0 12 43 b1 30 ff e9 cf cd 3f 60 f2 91 |U...C.0....?`.. 00:22:29.456 00000190 87 32 01 ca 34 82 60 08 12 ae 4a 52 0a ee 02 4f .2..4.`...JR...O 00:22:29.456 000001a0 e7 86 6a b7 ae d4 b7 09 a8 7f 0f 1d c7 76 31 d3 ..j..........v1. 00:22:29.456 000001b0 8b 5e 4c dc 59 8b ac a9 4e ea f4 ce a3 c5 3c 72 .^L.Y...N.....aW......?]....U 00:22:29.456 00000290 be 57 4a 9e e5 07 aa f3 8c 43 6b bc bc 90 37 cd .WJ......Ck...7. 00:22:29.456 000002a0 f3 23 d7 bf ab 43 16 49 3d c9 d0 1c bb 8d c6 63 .#...C.I=......c 00:22:29.456 000002b0 fd 45 4e 4d a5 52 2d ca db 9e 6a b7 ae 41 69 32 .ENM.R-...j..Ai2 00:22:29.456 000002c0 4d c4 8d 1b 27 4e db d4 85 e7 83 c2 ee 26 7d 14 M...'N.......&}. 00:22:29.456 000002d0 0b 02 c6 53 3c d3 72 53 e1 77 35 d9 47 e0 a0 8d ...S<.rS.w5.G... 00:22:29.456 000002e0 f9 37 71 96 ac 90 eb 28 b5 c1 34 db d6 38 ba 58 .7q....(..4..8.X 00:22:29.456 000002f0 72 14 91 0d e2 58 c3 36 c8 77 23 05 f5 f8 fd 03 r....X.6.w#..... 00:22:29.456 dh secret: 00:22:29.456 00000000 eb 50 46 4f 4f a5 4d 10 08 6b 1f b5 0b dc 63 1f .PFOO.M..k....c. 00:22:29.456 00000010 04 ca fd 53 e8 91 05 dd 85 07 17 ab 6c 43 7b a4 ...S........lC{. 00:22:29.456 00000020 6d 4d a1 08 8f c8 e5 25 1c 62 f1 3d 8c 53 a6 84 mM.....%.b.=.S.. 00:22:29.456 00000030 ef 74 b7 17 89 51 10 fa 28 c6 ec 0f 4a 8f 0c f4 .t...Q..(...J... 00:22:29.456 00000040 65 4e 90 06 8e 77 02 3e 0d 46 2f 90 25 64 26 fd eN...w.>.F/.%d&. 00:22:29.456 00000050 50 f7 fe 50 93 2c b1 14 10 eb 12 30 cf e6 32 0b P..P.,.....0..2. 00:22:29.456 00000060 93 69 90 51 de 20 5f b3 18 29 7c fe 0b c7 63 61 .i.Q. _..)|...ca 00:22:29.456 00000070 fc 26 8d 95 62 ca 7b 1a 3b 79 c7 9f 0b 4b 83 4d .&..b.{.;y...K.M 00:22:29.456 00000080 1a f5 53 a7 3a 0b b4 1f b9 f0 fa 3f 71 f8 bf 5a ..S.:......?q..Z 00:22:29.456 00000090 1b 2a 30 5b cc 9f f0 df 9a 03 e1 9a e3 cc 73 2e .*0[..........s. 00:22:29.456 000000a0 47 8b 97 13 85 37 f2 b5 48 87 7c a1 f5 76 6e 31 G....7..H.|..vn1 00:22:29.456 000000b0 d4 64 f6 2d 06 9b d7 ef 47 29 e6 ba 52 97 ad a6 .d.-....G)..R... 00:22:29.456 000000c0 9a 1e 53 a5 e1 c0 3f 54 ae a9 69 5d bf 63 90 61 ..S...?T..i].c.a 00:22:29.456 000000d0 50 26 3d 44 32 6d fb 4f d2 5c 1a 65 c6 21 87 87 P&=D2m.O.\.e.!.. 00:22:29.457 000000e0 69 46 6a a0 82 7e 93 04 95 92 3a 77 34 4d 80 36 iFj..~....:w4M.6 00:22:29.457 000000f0 f0 71 46 57 ca b0 d3 18 db 1a 74 b1 9a 13 1f 44 .qFW......t....D 00:22:29.457 00000100 3a b8 27 77 bc 2d df 53 9d 6f 01 9f 72 23 37 63 :.'w.-.S.o..r#7c 00:22:29.457 00000110 45 46 1a a2 a4 ba f0 9e 80 fb 68 08 8e 33 cc 5e EF........h..3.^ 00:22:29.457 00000120 e9 e4 3d 8c 91 26 57 6a ca 9e 00 ab 9d 97 3e f9 ..=..&Wj......>. 00:22:29.457 00000130 a9 df d5 7a c4 a2 c6 cc 76 c9 d1 bb 70 15 67 5b ...z....v...p.g[ 00:22:29.457 00000140 ad 22 29 c5 9e fe dc 07 ef a9 6e 1a 3c 6f ad be .").......n.9.......\.[./.. 00:22:29.457 000001c0 17 f8 9b 78 9d 07 86 ff 93 d9 61 81 be 72 b1 36 ...x......a..r.6 00:22:29.457 000001d0 b1 87 a4 60 b4 3a 9d 88 57 05 6b a7 da 07 1f 32 ...`.:..W.k....2 00:22:29.457 000001e0 a4 f1 33 af 63 21 1d 62 10 54 a1 1f 50 ce 96 ab ..3.c!.b.T..P... 00:22:29.457 000001f0 2c c5 9e 1b 8d c8 bc 5d 8e ce 6c 1e 23 ff 1a f4 ,......]..l.#... 00:22:29.457 00000200 d6 aa 2a 29 db bf a1 21 28 02 ad 0c 96 6b d3 76 ..*)...!(....k.v 00:22:29.457 00000210 6f 8e f0 5d f8 93 98 bd 9a 8d 3c 4c 83 10 85 3e o..]...... 00:22:29.457 00000220 73 f1 3b bc eb f9 33 52 be b5 1a f0 27 ed 75 96 s.;...3R....'.u. 
00:22:29.457 00000230 a5 d2 df 7e 2c cc 47 f0 b6 c3 43 5b 5b 89 5e e3 ...~,.G...C[[.^. 00:22:29.457 00000240 c5 26 ff cc 85 6a c4 42 79 ed ac 3b ad c1 a9 22 .&...j.By..;..." 00:22:29.457 00000250 76 f5 82 60 ff 5a dc c8 50 f4 04 91 b7 96 d8 10 v..`.Z..P....... 00:22:29.457 00000260 31 24 60 7b de 96 73 2b 5a b8 69 1f ce 3a f7 2c 1$`{..s+Z.i..:., 00:22:29.457 00000270 81 b3 6e 14 63 ab 4c a2 e4 94 5e 84 45 ef cb 83 ..n.c.L...^.E... 00:22:29.457 00000280 92 c2 ca d8 58 04 b8 e8 06 46 f6 b1 22 09 e3 40 ....X....F.."..@ 00:22:29.457 00000290 c2 ff 02 bd c3 c3 a7 69 ca 5f 47 39 d4 03 4b 8f .......i._G9..K. 00:22:29.457 000002a0 23 d2 5f bc ca f9 94 c3 92 58 45 f6 c0 fe e6 9a #._......XE..... 00:22:29.457 000002b0 b7 e3 71 f0 1e b6 81 af 23 e3 f8 d1 e1 56 9a 2c ..q.....#....V., 00:22:29.457 000002c0 de 80 76 69 c7 6f 55 91 b0 fc 39 95 b2 50 28 92 ..vi.oU...9..P(. 00:22:29.457 000002d0 90 8f ab 0b 3d a4 30 b9 81 b5 fd 7e 72 66 bf 19 ....=.0....~rf.. 00:22:29.457 000002e0 d3 dd 7b 6b 97 02 f4 70 c1 ba e9 50 e0 06 da a3 ..{k...p...P.... 00:22:29.457 000002f0 e0 5d 15 06 03 15 79 2a 5a 00 a8 7b d1 57 6c 0f .]....y*Z..{.Wl. 00:22:29.457 [2024-09-27 13:27:06.767272] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key0, hash=1, dhgroup=4, seq=3775755204, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.457 [2024-09-27 13:27:06.767551] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.457 [2024-09-27 13:27:06.817042] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.457 [2024-09-27 13:27:06.817502] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.457 [2024-09-27 13:27:06.817767] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.457 [2024-09-27 13:27:06.818009] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.457 [2024-09-27 13:27:06.946189] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.457 [2024-09-27 13:27:06.946455] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.457 [2024-09-27 13:27:06.946612] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 4 (ffdhe6144) 00:22:29.457 [2024-09-27 13:27:06.946828] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.457 [2024-09-27 13:27:06.947023] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.457 ctrlr pubkey: 00:22:29.457 00000000 17 20 b3 03 8f b5 93 cc 07 f4 04 a7 6e e9 f1 52 . ..........n..R 00:22:29.457 00000010 f7 08 92 4d 2a cb 2a 33 66 b0 83 ac 29 b1 d7 45 ...M*.*3f...)..E 00:22:29.457 00000020 05 e0 39 ed 7b f0 20 04 e2 4d a1 85 55 ac 1f d6 ..9.{. ..M..U... 
00:22:29.457 00000030 98 e2 a0 67 20 e8 08 37 d7 d5 ae 8a 3d f9 35 4d ...g ..7....=.5M 00:22:29.457 00000040 4c 3a 44 9c c2 e5 5b b0 c6 29 91 70 a6 c1 3b 25 L:D...[..).p..;% 00:22:29.457 00000050 29 f6 f8 c0 13 0b 97 27 b8 ba 1f 71 6b 71 89 ed )......'...qkq.. 00:22:29.457 00000060 6e 9c df c2 4f a7 92 6f 11 0b 5b 2f 4e bc 88 7b n...O..o..[/N..{ 00:22:29.457 00000070 6b ed 99 c8 9b 91 a0 b4 1d 30 ce 7e cf 64 fe cc k........0.~.d.. 00:22:29.457 00000080 0a bb 55 98 77 ba 23 67 6f 14 cd 92 b6 6d 2e 22 ..U.w.#go....m." 00:22:29.457 00000090 2d 72 02 05 26 03 e1 c2 bb f9 8a 58 d4 1f b3 08 -r..&......X.... 00:22:29.457 000000a0 6f 1a 91 48 d2 c7 97 9f f6 bb 66 12 47 d0 6c 46 o..H......f.G.lF 00:22:29.457 000000b0 ec e3 cd a8 ad a5 d9 d5 9a b4 27 c0 a2 12 b9 95 ..........'..... 00:22:29.457 000000c0 86 d4 a7 43 dd db 73 f6 bc 4a 73 0b 77 96 1d a1 ...C..s..Js.w... 00:22:29.457 000000d0 ca 69 02 79 c3 29 56 11 4b 92 ce 1c 7e a4 82 f1 .i.y.)V.K...~... 00:22:29.457 000000e0 2f 26 d6 d6 39 55 04 86 ab a5 8f 35 af 3c 6b 4f /&..9U.....5.xI..>O[!'5.XTw. 00:22:29.457 00000110 ad c3 43 81 df 9b 78 ff 5c fa 00 e6 3f 89 bc 71 ..C...x.\...?..q 00:22:29.457 00000120 5d 53 e9 e1 4b c9 1c d9 c3 33 b4 7e e8 97 37 f8 ]S..K....3.~..7. 00:22:29.457 00000130 9c 53 fc ea 72 96 93 5d 02 2a a9 da c4 b6 c6 80 .S..r..].*...... 00:22:29.457 00000140 ab 6b 27 74 ca 88 d7 6f 00 1e f5 12 a1 c4 9a 7f .k't...o........ 00:22:29.457 00000150 66 4f c3 b4 83 04 d6 10 ce 80 90 1d 3a 21 c2 d1 fO..........:!.. 00:22:29.457 00000160 22 3f 4a d7 d9 ec a8 8e fa 28 c4 93 bc dc 2d ad "?J......(....-. 00:22:29.457 00000170 a5 05 e0 28 10 b5 29 1b c8 43 c6 5d 5d 4a b0 b2 ...(..)..C.]]J.. 00:22:29.457 00000180 13 e0 e4 c8 4d 8b 5d f4 1a f2 aa b1 3d b4 5f 42 ....M.].....=._B 00:22:29.457 00000190 23 ce 21 02 ce a5 e3 04 84 46 67 81 e7 61 40 be #.!......Fg..a@. 00:22:29.457 000001a0 ae b3 39 f6 3e d9 47 e1 ae fa 06 ce aa 66 6f c5 ..9.>.G......fo. 00:22:29.457 000001b0 87 c4 1c 2a c0 11 89 2e 03 d7 30 15 34 49 e4 ec ...*......0.4I.. 00:22:29.457 000001c0 f6 47 09 d2 cd cd d6 57 e0 d7 39 d6 b8 94 c5 07 .G.....W..9..... 00:22:29.457 000001d0 8a 2b 67 5b d3 09 b6 b7 33 9b 16 14 69 c5 e9 af .+g[....3...i... 00:22:29.457 000001e0 8e 51 40 40 ac 56 36 c2 26 46 06 41 9b 0a b3 f9 .Q@@.V6.&F.A.... 00:22:29.457 000001f0 83 c8 e0 63 30 bc ac 81 3c 9e 14 6d fb 06 35 b2 ...c0...<..m..5. 00:22:29.457 00000200 0f 23 6d 8b 3e 88 5d de d8 85 72 0b e0 ac 57 82 .#m.>.]...r...W. 00:22:29.457 00000210 84 ce df 23 00 e3 d9 d4 2b 0a 2f 2d 1c d7 78 b4 ...#....+./-..x. 00:22:29.457 00000220 3f 37 9a c1 f0 f9 0e dd 89 e6 e2 9f 2c 00 ef 0c ?7..........,... 00:22:29.457 00000230 6a 0f 32 83 7b 6a 9e 2a ea 2f 9f ae 30 9a af 3c j.2.{j.*./..0..< 00:22:29.457 00000240 2f fb d2 58 79 9c a3 02 ce 08 3a d1 34 54 54 14 /..Xy.....:.4TT. 00:22:29.457 00000250 e8 d3 0d bc 4e 7f 5f 06 74 2a 4e 44 e9 21 07 bd ....N._.t*ND.!.. 00:22:29.457 00000260 50 04 7d b8 be af ab ce 5d 1d db c5 98 b2 8c 6b P.}.....]......k 00:22:29.457 00000270 b7 9b 88 03 dc 0f 96 37 00 61 7f a9 28 57 80 ba .......7.a..(W.. 00:22:29.457 00000280 c2 84 6d fa af 00 40 4b 78 de 65 c9 a6 8b 40 1c ..m...@Kx.e...@. 00:22:29.457 00000290 05 56 2c 54 2e 0a 43 61 1a f0 5f 18 5e 9f 6e d6 .V,T..Ca.._.^.n. 00:22:29.457 000002a0 d0 19 d6 a9 ee 4d 05 1e 59 d4 81 89 96 fe 87 0c .....M..Y....... 00:22:29.457 000002b0 f5 6a 25 9a a7 49 e1 c9 3e 0b e3 70 7c e9 b7 ba .j%..I..>..p|... 
00:22:29.457 000002c0 21 8e f0 59 92 16 98 19 04 ce fe 8a 80 f0 fd 7c !..Y...........| 00:22:29.457 000002d0 bc 9a 21 93 0a 53 2a f9 6f 6b 83 03 27 72 82 58 ..!..S*.ok..'r.X 00:22:29.457 000002e0 57 bc 2a 0a fc c2 ca e8 c6 8e 22 a2 a0 ab f6 5b W.*......."....[ 00:22:29.457 000002f0 73 78 67 6c 0f 75 d2 95 62 99 4d a1 0d 94 e1 7c sxgl.u..b.M....| 00:22:29.457 host pubkey: 00:22:29.457 00000000 7b 9a b4 49 cf b4 9b c9 d9 b2 9e 83 2d 87 10 37 {..I........-..7 00:22:29.457 00000010 e1 f7 1b 1a 37 75 f4 2f c1 1e db 30 a4 cb 79 89 ....7u./...0..y. 00:22:29.457 00000020 10 88 7d 10 d0 6d f7 b4 b9 f1 a5 0f 11 79 c8 61 ..}..m.......y.a 00:22:29.457 00000030 97 11 e5 56 93 8a af 60 04 4d 05 da b8 a5 42 07 ...V...`.M....B. 00:22:29.457 00000040 51 50 6e 3d fc 63 54 dc da 85 bf 02 a4 5c 0c d5 QPn=.cT......\.. 00:22:29.457 00000050 33 5d 56 77 88 54 6a e0 95 cf 02 5f f3 f7 ee 6b 3]Vw.Tj...._...k 00:22:29.457 00000060 40 ec c2 e4 a4 9d 7e 9f fe 45 65 b6 46 5a 0a 2a @.....~..Ee.FZ.* 00:22:29.457 00000070 ac 0c 1f 38 fb 18 53 2a 2e 70 7b 0f 43 44 a3 c0 ...8..S*.p{.CD.. 00:22:29.457 00000080 ab e6 e3 e3 bb 52 e7 ea b9 28 59 25 10 a8 32 d2 .....R...(Y%..2. 00:22:29.457 00000090 e8 83 27 46 43 6b 30 d5 2f 33 2c b5 c5 02 0a c9 ..'FCk0./3,..... 00:22:29.457 000000a0 3a 0c 52 89 af 3f 19 ac b5 4a d4 26 6e af e7 90 :.R..?...J.&n... 00:22:29.457 000000b0 6f bd cb d7 47 eb 68 1a 9e 05 11 9d 2c 0b fa bb o...G.h.....,... 00:22:29.457 000000c0 22 16 eb a7 93 f2 43 91 09 3e 94 c5 3b f2 fb ed ".....C..>..;... 00:22:29.457 000000d0 f3 11 fe b5 71 64 13 22 1a 4e 65 3a a5 4c bd 61 ....qd.".Ne:.L.a 00:22:29.457 000000e0 1b 4a ea 75 80 00 9e 97 15 79 3d c5 fb a9 28 8d .J.u.....y=...(. 00:22:29.457 000000f0 ec de c3 fb 4f 63 51 91 48 65 01 a1 d9 ee 7f ca ....OcQ.He...... 00:22:29.457 00000100 a2 21 de cd 5f 87 ac d2 9b f9 54 2a 9b 76 e1 93 .!.._.....T*.v.. 00:22:29.457 00000110 13 91 51 85 32 ba b7 d9 97 7b af e0 95 ca 2a fa ..Q.2....{....*. 00:22:29.457 00000120 5c 2d 35 ab 3d d5 96 ac dd 57 da 6e 4b 23 19 31 \-5.=....W.nK#.1 00:22:29.457 00000130 d9 f1 ee 92 48 ce 30 f4 b5 9f cd e6 71 fe 58 5e ....H.0.....q.X^ 00:22:29.457 00000140 50 9d a8 87 2d 66 fb a3 1b ad ba 07 4f cb b5 ed P...-f......O... 00:22:29.457 00000150 c0 9c e2 93 91 56 8b d0 e9 a9 3a 41 28 8b 80 d7 .....V....:A(... 00:22:29.457 00000160 5f 65 fc 84 d4 32 31 f0 bb 29 82 4a cf 7c e2 4f _e...21..).J.|.O 00:22:29.457 00000170 6e 77 bb 4a ec f5 69 6a f9 56 5e 77 78 27 a9 13 nw.J..ij.V^wx'.. 00:22:29.457 00000180 4d 0e 6f 8d 38 82 34 fe 06 04 cf 43 c1 e6 bb 00 M.o.8.4....C.... 00:22:29.457 00000190 99 3d 87 84 76 8a d1 86 68 96 fc 26 76 b0 2a 57 .=..v...h..&v.*W 00:22:29.457 000001a0 49 d5 76 06 ec 02 ef 8b e5 8e bc e5 ea 0b eb c2 I.v............. 00:22:29.457 000001b0 e8 61 9c 3d d1 8d 88 ad 88 57 27 4b d8 44 39 69 .a.=.....W'K.D9i 00:22:29.457 000001c0 4c c1 02 5e c7 28 50 3a 19 4c d1 1b 2c 3a 5e 97 L..^.(P:.L..,:^. 00:22:29.457 000001d0 99 f3 8c 04 52 d9 d2 f3 34 bd a2 7a 57 bb 7f f2 ....R...4..zW... 00:22:29.458 000001e0 fa 3e 94 55 cf 3d a2 60 5a e7 63 c6 41 d4 5c 3d .>.U.=.`Z.c.A.\= 00:22:29.458 000001f0 fb e4 42 6d bb aa 1c a0 34 3f 1d ae d8 68 56 57 ..Bm....4?...hVW 00:22:29.458 00000200 f9 73 52 6c 77 13 21 c2 5b 52 1b 51 46 61 06 8d .sRlw.!.[R.QFa.. 00:22:29.458 00000210 0d 79 db 85 af 29 7f b0 22 2c 07 44 8a 74 8b 4a .y...)..",.D.t.J 00:22:29.458 00000220 ac c4 2c 77 f4 7e 8f 9b d4 6f 61 64 d5 62 24 3c ..,w.~...oad.b$< 00:22:29.458 00000230 c3 9f 0d 9e bd ec 4d e9 37 e7 88 95 10 39 71 dd ......M.7....9q. 
00:22:29.458 00000240 e7 18 13 61 43 12 29 55 49 bd f1 f8 6b a0 9a 20 ...aC.)UI...k.. 00:22:29.458 00000250 80 2f 0c 2d 2d 79 2d ca eb 5a d3 8e 68 6e 5a 40 ./.--y-..Z..hnZ@ 00:22:29.458 00000260 b7 1f 3d 4e 88 47 d6 cd 2f 47 57 c8 46 31 b4 20 ..=N.G../GW.F1. 00:22:29.458 00000270 3d 23 4a 80 80 3f 40 6b 8f 50 bc 7a 11 a4 7f a8 =#J..?@k.P.z.... 00:22:29.458 00000280 60 73 b3 c3 03 68 55 86 7b ae cf 74 54 6b ae 57 `s...hU.{..tTk.W 00:22:29.458 00000290 47 6c 68 ad 88 b4 ae 39 5d cf 5e 75 f4 3f e5 98 Glh....9].^u.?.. 00:22:29.458 000002a0 bd fe fa 3c e1 dc 04 12 76 8b 54 1c 2c 01 b4 c7 ...<....v.T.,... 00:22:29.458 000002b0 fd 17 dd f0 ac 03 0a 6d 1c d4 82 22 27 de 05 fc .......m..."'... 00:22:29.458 000002c0 d5 c7 34 f7 d7 60 1b b9 d9 d5 20 f6 69 5e 83 1f ..4..`.... .i^.. 00:22:29.458 000002d0 42 08 7e 62 d9 58 af e7 57 e0 36 8e ed 2e f8 04 B.~b.X..W.6..... 00:22:29.458 000002e0 23 6b b4 df e4 80 38 83 0a e8 d6 e6 0e 90 3d af #k....8.......=. 00:22:29.458 000002f0 cd 19 ed 60 32 2f c6 5a bb 1f f3 05 16 53 a4 17 ...`2/.Z.....S.. 00:22:29.458 dh secret: 00:22:29.458 00000000 c6 07 28 ac 8f 45 3c 7e 22 1d 66 c7 c2 17 5b bc ..(..E<~".f...[. 00:22:29.458 00000010 c1 48 13 3e 0e 92 8f 87 44 4c b8 f6 6f 1a 09 4e .H.>....DL..o..N 00:22:29.458 00000020 82 69 b0 69 50 7f f8 ad ff 3b 93 c5 44 5f 9a 10 .i.iP....;..D_.. 00:22:29.458 00000030 59 bd 17 37 78 1c 6c 56 2d ed e9 e0 98 ad be 53 Y..7x.lV-......S 00:22:29.458 00000040 9d 1d dc af a3 98 75 e1 25 3e c3 4d 15 d0 88 2e ......u.%>.M.... 00:22:29.458 00000050 0f bb c2 cd 04 0a a2 fa 6e 67 48 a0 79 7b 85 39 ........ngH.y{.9 00:22:29.458 00000060 93 20 03 c8 45 1a 76 24 b6 1f f2 b0 19 25 82 64 . ..E.v$.....%.d 00:22:29.458 00000070 90 50 cb 7f a0 67 fa 3a d3 7a c5 b4 01 2e a9 4a .P...g.:.z.....J 00:22:29.458 00000080 17 81 7a 1e 1d c4 c8 5a 22 47 1e 2b cd 1f ce f4 ..z....Z"G.+.... 00:22:29.458 00000090 f6 93 a2 f5 9e 07 b0 9e 72 74 cc d1 6b 82 13 82 ........rt..k... 00:22:29.458 000000a0 89 7b 48 ee e5 12 52 86 57 75 9d 5d c5 a7 8e 57 .{H...R.Wu.]...W 00:22:29.458 000000b0 a3 3b 21 5d 6a 8e fb f4 38 a9 2c d0 34 e6 0a a9 .;!]j...8.,.4... 00:22:29.458 000000c0 a1 ba 91 05 de 42 94 84 dc 31 d4 60 5e 64 12 c5 .....B...1.`^d.. 00:22:29.458 000000d0 36 9e 70 5f 99 85 b7 59 dd 91 83 48 5b c6 10 27 6.p_...Y...H[..' 00:22:29.458 000000e0 99 59 70 f2 43 aa 59 bf 12 40 83 ee 4e 8e e3 af .Yp.C.Y..@..N... 00:22:29.458 000000f0 ed 5b 44 57 7c be 1d 87 f1 7d aa 00 63 ad bd 92 .[DW|....}..c... 00:22:29.458 00000100 39 bc 55 f0 ad 6c 8d 12 0e 13 a5 43 44 ce 66 6e 9.U..l.....CD.fn 00:22:29.458 00000110 2e 85 0e df 34 01 43 5c 54 ce 1b e3 04 68 6b 0d ....4.C\T....hk. 00:22:29.458 00000120 77 51 ea fa 7b 8f 48 50 ec 3b 4c 97 fa c0 14 ad wQ..{.HP.;L..... 00:22:29.458 00000130 e1 a7 cd 33 55 4a 63 84 b1 c4 3c b7 68 7d 6c 13 ...3UJc...<.h}l. 00:22:29.458 00000140 e5 01 b0 66 d6 b7 ce 16 df cc ea c2 81 77 ea 2a ...f.........w.* 00:22:29.458 00000150 af f3 ec e9 bc 50 fe f5 0b db 52 d5 50 66 ea 37 .....P....R.Pf.7 00:22:29.458 00000160 1c b8 fe e5 d9 b7 2c 7f 91 a6 32 86 3c da ff f3 ......,...2.<... 00:22:29.458 00000170 cb 66 13 1d 41 f2 1f 32 67 04 ff 59 9d 96 b5 d4 .f..A..2g..Y.... 00:22:29.458 00000180 e0 8f d9 c0 51 6e a2 c8 7d 85 0d 48 38 01 e0 1a ....Qn..}..H8... 00:22:29.458 00000190 d3 a3 ca 7f ce a1 b0 ec 52 35 33 ec ad ff bf a3 ........R53..... 00:22:29.458 000001a0 f5 78 b3 b4 5b 31 29 f9 57 ff ff 8a ae c4 2f 20 .x..[1).W...../ 00:22:29.458 000001b0 05 3d 71 85 58 e8 eb a7 7d 1e 7c 12 e4 c3 f3 f1 .=q.X...}.|..... 
00:22:29.458 000001c0 04 d5 f8 de a4 3f 79 c9 99 d6 0d ff 46 09 f8 00 .....?y.....F... 00:22:29.458 000001d0 cc ba 97 e4 ae c0 c7 6b fa ae 6b 49 52 5b 58 96 .......k..kIR[X. 00:22:29.458 000001e0 d6 13 35 80 97 a7 c4 f9 4f 0e b0 23 1a 34 ea 6e ..5.....O..#.4.n 00:22:29.458 000001f0 42 17 58 21 83 9a d3 4d 03 6d d3 ee a3 e4 c0 16 B.X!...M.m...... 00:22:29.458 00000200 84 07 46 16 b6 2d ed 21 49 42 2b a4 4e 00 36 2f ..F..-.!IB+.N.6/ 00:22:29.458 00000210 d7 59 8c 3b 88 67 9d a5 95 2a c2 21 fe ff e2 e9 .Y.;.g...*.!.... 00:22:29.458 00000220 1c f5 4e 0b da 78 15 bc e4 e1 ef 0a d4 a7 05 67 ..N..x.........g 00:22:29.458 00000230 78 da af 39 7e 54 bf a9 a1 27 fc 21 78 03 54 d2 x..9~T...'.!x.T. 00:22:29.458 00000240 3c ba 61 dd eb f3 c6 cb 1e 66 cb 4c 7c d9 79 98 <.a......f.L|.y. 00:22:29.458 00000250 e8 c5 c3 98 8d 16 ea 47 fd e2 b6 ba fe f6 36 17 .......G......6. 00:22:29.458 00000260 50 0a 65 1a e1 37 03 4b 2c 93 b3 ed b7 84 5d ea P.e..7.K,.....]. 00:22:29.458 00000270 4c a0 f7 1d 94 5b 41 ac d6 21 bc 9f 15 f2 65 8b L....[A..!....e. 00:22:29.458 00000280 f5 36 d1 b1 9f b4 2d 49 1a 54 d7 a7 ea 0d 3e 83 .6....-I.T....>. 00:22:29.458 00000290 d4 60 09 bb f8 ce c4 9a cd a8 09 67 5d 4f b1 0e .`.........g]O.. 00:22:29.458 000002a0 e5 3c df 28 78 5e 97 86 e0 00 cd 2c 20 9e 7f ae .<.(x^....., ... 00:22:29.458 000002b0 f8 6b 0c 4f f1 12 62 d8 69 15 c7 29 7d f0 68 1b .k.O..b.i..)}.h. 00:22:29.458 000002c0 58 09 60 57 3a 40 e3 11 8c 2f ff 7f c5 fe 77 c3 X.`W:@.../....w. 00:22:29.458 000002d0 be 56 c8 f7 37 86 c0 8a a6 73 d2 f3 ad cd 7d 78 .V..7....s....}x 00:22:29.458 000002e0 69 0d 60 62 17 e5 8c 99 f1 62 c1 74 a3 4c 94 08 i.`b.....b.t.L.. 00:22:29.458 000002f0 ad 4b 2e 98 c8 39 1b 6d 97 52 56 f2 5e d8 43 a7 .K...9.m.RV.^.C. 00:22:29.458 [2024-09-27 13:27:07.015376] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key1, hash=1, dhgroup=4, seq=3775755205, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.458 [2024-09-27 13:27:07.015670] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.458 [2024-09-27 13:27:07.067904] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.458 [2024-09-27 13:27:07.068371] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.458 [2024-09-27 13:27:07.068647] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.458 [2024-09-27 13:27:07.068944] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.458 [2024-09-27 13:27:07.121143] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.458 [2024-09-27 13:27:07.121319] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 1 (sha256) 00:22:29.458 [2024-09-27 13:27:07.121478] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 4 (ffdhe6144) 00:22:29.458 [2024-09-27 13:27:07.121573] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] 
auth state: await-negotiate 00:22:29.458 [2024-09-27 13:27:07.121851] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.458 ctrlr pubkey: 00:22:29.458 00000000 17 20 b3 03 8f b5 93 cc 07 f4 04 a7 6e e9 f1 52 . ..........n..R 00:22:29.458 00000010 f7 08 92 4d 2a cb 2a 33 66 b0 83 ac 29 b1 d7 45 ...M*.*3f...)..E 00:22:29.458 00000020 05 e0 39 ed 7b f0 20 04 e2 4d a1 85 55 ac 1f d6 ..9.{. ..M..U... 00:22:29.458 00000030 98 e2 a0 67 20 e8 08 37 d7 d5 ae 8a 3d f9 35 4d ...g ..7....=.5M 00:22:29.458 00000040 4c 3a 44 9c c2 e5 5b b0 c6 29 91 70 a6 c1 3b 25 L:D...[..).p..;% 00:22:29.458 00000050 29 f6 f8 c0 13 0b 97 27 b8 ba 1f 71 6b 71 89 ed )......'...qkq.. 00:22:29.458 00000060 6e 9c df c2 4f a7 92 6f 11 0b 5b 2f 4e bc 88 7b n...O..o..[/N..{ 00:22:29.458 00000070 6b ed 99 c8 9b 91 a0 b4 1d 30 ce 7e cf 64 fe cc k........0.~.d.. 00:22:29.458 00000080 0a bb 55 98 77 ba 23 67 6f 14 cd 92 b6 6d 2e 22 ..U.w.#go....m." 00:22:29.458 00000090 2d 72 02 05 26 03 e1 c2 bb f9 8a 58 d4 1f b3 08 -r..&......X.... 00:22:29.458 000000a0 6f 1a 91 48 d2 c7 97 9f f6 bb 66 12 47 d0 6c 46 o..H......f.G.lF 00:22:29.458 000000b0 ec e3 cd a8 ad a5 d9 d5 9a b4 27 c0 a2 12 b9 95 ..........'..... 00:22:29.458 000000c0 86 d4 a7 43 dd db 73 f6 bc 4a 73 0b 77 96 1d a1 ...C..s..Js.w... 00:22:29.458 000000d0 ca 69 02 79 c3 29 56 11 4b 92 ce 1c 7e a4 82 f1 .i.y.)V.K...~... 00:22:29.458 000000e0 2f 26 d6 d6 39 55 04 86 ab a5 8f 35 af 3c 6b 4f /&..9U.....5.xI..>O[!'5.XTw. 00:22:29.458 00000110 ad c3 43 81 df 9b 78 ff 5c fa 00 e6 3f 89 bc 71 ..C...x.\...?..q 00:22:29.458 00000120 5d 53 e9 e1 4b c9 1c d9 c3 33 b4 7e e8 97 37 f8 ]S..K....3.~..7. 00:22:29.458 00000130 9c 53 fc ea 72 96 93 5d 02 2a a9 da c4 b6 c6 80 .S..r..].*...... 00:22:29.458 00000140 ab 6b 27 74 ca 88 d7 6f 00 1e f5 12 a1 c4 9a 7f .k't...o........ 00:22:29.458 00000150 66 4f c3 b4 83 04 d6 10 ce 80 90 1d 3a 21 c2 d1 fO..........:!.. 00:22:29.458 00000160 22 3f 4a d7 d9 ec a8 8e fa 28 c4 93 bc dc 2d ad "?J......(....-. 00:22:29.458 00000170 a5 05 e0 28 10 b5 29 1b c8 43 c6 5d 5d 4a b0 b2 ...(..)..C.]]J.. 00:22:29.458 00000180 13 e0 e4 c8 4d 8b 5d f4 1a f2 aa b1 3d b4 5f 42 ....M.].....=._B 00:22:29.458 00000190 23 ce 21 02 ce a5 e3 04 84 46 67 81 e7 61 40 be #.!......Fg..a@. 00:22:29.458 000001a0 ae b3 39 f6 3e d9 47 e1 ae fa 06 ce aa 66 6f c5 ..9.>.G......fo. 00:22:29.458 000001b0 87 c4 1c 2a c0 11 89 2e 03 d7 30 15 34 49 e4 ec ...*......0.4I.. 00:22:29.458 000001c0 f6 47 09 d2 cd cd d6 57 e0 d7 39 d6 b8 94 c5 07 .G.....W..9..... 00:22:29.458 000001d0 8a 2b 67 5b d3 09 b6 b7 33 9b 16 14 69 c5 e9 af .+g[....3...i... 00:22:29.458 000001e0 8e 51 40 40 ac 56 36 c2 26 46 06 41 9b 0a b3 f9 .Q@@.V6.&F.A.... 00:22:29.458 000001f0 83 c8 e0 63 30 bc ac 81 3c 9e 14 6d fb 06 35 b2 ...c0...<..m..5. 00:22:29.458 00000200 0f 23 6d 8b 3e 88 5d de d8 85 72 0b e0 ac 57 82 .#m.>.]...r...W. 00:22:29.458 00000210 84 ce df 23 00 e3 d9 d4 2b 0a 2f 2d 1c d7 78 b4 ...#....+./-..x. 00:22:29.458 00000220 3f 37 9a c1 f0 f9 0e dd 89 e6 e2 9f 2c 00 ef 0c ?7..........,... 00:22:29.458 00000230 6a 0f 32 83 7b 6a 9e 2a ea 2f 9f ae 30 9a af 3c j.2.{j.*./..0..< 00:22:29.458 00000240 2f fb d2 58 79 9c a3 02 ce 08 3a d1 34 54 54 14 /..Xy.....:.4TT. 00:22:29.458 00000250 e8 d3 0d bc 4e 7f 5f 06 74 2a 4e 44 e9 21 07 bd ....N._.t*ND.!.. 00:22:29.458 00000260 50 04 7d b8 be af ab ce 5d 1d db c5 98 b2 8c 6b P.}.....]......k 00:22:29.458 00000270 b7 9b 88 03 dc 0f 96 37 00 61 7f a9 28 57 80 ba .......7.a..(W.. 
00:22:29.458 00000280 c2 84 6d fa af 00 40 4b 78 de 65 c9 a6 8b 40 1c ..m...@Kx.e...@. 00:22:29.458 00000290 05 56 2c 54 2e 0a 43 61 1a f0 5f 18 5e 9f 6e d6 .V,T..Ca.._.^.n. 00:22:29.458 000002a0 d0 19 d6 a9 ee 4d 05 1e 59 d4 81 89 96 fe 87 0c .....M..Y....... 00:22:29.458 000002b0 f5 6a 25 9a a7 49 e1 c9 3e 0b e3 70 7c e9 b7 ba .j%..I..>..p|... 00:22:29.458 000002c0 21 8e f0 59 92 16 98 19 04 ce fe 8a 80 f0 fd 7c !..Y...........| 00:22:29.459 000002d0 bc 9a 21 93 0a 53 2a f9 6f 6b 83 03 27 72 82 58 ..!..S*.ok..'r.X 00:22:29.459 000002e0 57 bc 2a 0a fc c2 ca e8 c6 8e 22 a2 a0 ab f6 5b W.*......."....[ 00:22:29.459 000002f0 73 78 67 6c 0f 75 d2 95 62 99 4d a1 0d 94 e1 7c sxgl.u..b.M....| 00:22:29.459 host pubkey: 00:22:29.459 00000000 60 1d 9d 1d a3 24 90 34 04 62 65 87 3b 75 68 ea `....$.4.be.;uh. 00:22:29.459 00000010 a9 7a 80 28 a5 e0 d1 08 2a 2a 37 b9 c7 6d c5 15 .z.(....**7..m.. 00:22:29.459 00000020 38 92 d9 31 6e 8a 88 af 9d 4b 19 41 38 f1 05 fe 8..1n....K.A8... 00:22:29.459 00000030 43 1e 9c 7b e5 be 71 18 f6 f9 d5 38 95 11 c3 23 C..{..q....8...# 00:22:29.459 00000040 26 bc 80 8c 51 60 34 85 f3 e7 f4 92 1a 66 da 90 &...Q`4......f.. 00:22:29.459 00000050 3f 78 cd c1 49 18 a0 32 36 e6 1f 98 81 f9 e4 7c ?x..I..26......| 00:22:29.459 00000060 39 39 a5 49 b4 cd 67 bc 87 f6 a0 43 e1 c0 73 fc 99.I..g....C..s. 00:22:29.459 00000070 ec c4 5a 23 8f 3d c6 a2 06 af c0 20 1b 39 d2 e3 ..Z#.=..... .9.. 00:22:29.459 00000080 55 e8 6d ec e5 40 73 f5 2b 3b 8e ab 30 8c e6 dd U.m..@s.+;..0... 00:22:29.459 00000090 c0 b4 41 09 5c 24 a5 5e 7a 82 fb fc 2f ea 3b e2 ..A.\$.^z.../.;. 00:22:29.459 000000a0 9b e4 e2 09 28 d8 24 48 c9 b1 86 bc 4c d2 9b 0b ....(.$H....L... 00:22:29.459 000000b0 21 8e 54 3e 6c 0c 48 2a 80 be d0 c5 1b 5c 9b 24 !.T>l.H*.....\.$ 00:22:29.459 000000c0 35 f4 96 4a ae dc d4 a4 ac 92 27 26 23 83 55 df 5..J......'&#.U. 00:22:29.459 000000d0 2b ce bf ce cb 09 4b f9 46 2e 58 bf 07 3c 9f af +.....K.F.X..<.. 00:22:29.459 000000e0 fd 98 97 4d 1f fd 30 57 a9 2f 42 8d a1 53 19 77 ...M..0W./B..S.w 00:22:29.459 000000f0 5c 4b d5 e5 3c 9c eb e3 2d 3b df 42 54 20 21 db \K..<...-;.BT !. 00:22:29.459 00000100 2a 31 7d e5 f7 6b f0 11 2b 43 02 3d 25 40 83 9e *1}..k..+C.=%@.. 00:22:29.459 00000110 8c 52 ab 34 ff f7 2c df 37 ce 31 8c a6 fd f0 b2 .R.4..,.7.1..... 00:22:29.459 00000120 67 f8 cb a1 90 1b 32 d0 5a ff b8 8c 0b 31 51 d1 g.....2.Z....1Q. 00:22:29.459 00000130 c2 4b 00 3d 6a 02 11 2e 23 b3 01 9c d2 e5 89 b5 .K.=j...#....... 00:22:29.459 00000140 ed f6 38 2e 85 f3 21 0d 82 0c de 5c 27 4c 2d fb ..8...!....\'L-. 00:22:29.459 00000150 9c 44 f7 9b 7f e4 80 cd 79 b4 63 9a 64 26 2d db .D......y.c.d&-. 00:22:29.459 00000160 dc 4e 88 a2 33 0d d4 86 b3 57 d8 c4 66 e0 57 3d .N..3....W..f.W= 00:22:29.459 00000170 12 fb b3 98 45 58 fd 5f 9e ca 77 96 fe 42 0e 66 ....EX._..w..B.f 00:22:29.459 00000180 b8 83 50 b0 b7 91 bf 9c 9e 08 b6 d9 e4 a5 6e f5 ..P...........n. 00:22:29.459 00000190 2f 06 f3 17 bb ff ab 5c 1f 1c c0 a6 2d 0d d1 c1 /......\....-... 00:22:29.459 000001a0 c9 96 c1 99 9d d2 7e 50 8f 4f 41 cf 21 8f f5 76 ......~P.OA.!..v 00:22:29.459 000001b0 d4 bc cb c0 9b 7c 6a a6 dd fd a5 e8 0e 61 90 41 .....|j......a.A 00:22:29.459 000001c0 51 ad ee 5c 6d e7 5b 4e 05 cb 62 38 81 99 ef 88 Q..\m.[N..b8.... 
00:22:29.459 000001d0 6a cb 30 61 4c 71 1b 68 0c 00 5a b8 54 08 25 31 j.0aLq.h..Z.T.%1 00:22:29.459 000001e0 b2 92 5a 16 fe 2e 82 d9 f2 05 dd 47 ec 33 1b 31 ..Z........G.3.1 00:22:29.459 000001f0 c3 35 b3 c6 ec 2b 7e 29 a5 de 5d 08 56 df 78 5a .5...+~)..].V.xZ 00:22:29.459 00000200 33 6b 21 b4 33 b3 55 59 58 42 5d a3 31 f1 ee 80 3k!.3.UYXB].1... 00:22:29.459 00000210 c1 12 27 39 3c d4 57 76 e3 6a ac a7 6b d6 26 6c ..'9<.Wv.j..k.&l 00:22:29.459 00000220 4b bc 73 48 2b 31 c4 4a 02 17 db 22 51 27 c3 c5 K.sH+1.J..."Q'.. 00:22:29.459 00000230 cf 86 74 73 fb d9 a9 ca b3 99 94 80 4e 80 a7 08 ..ts........N... 00:22:29.459 00000240 ca f5 03 cd 9e f1 2e 2b 4b f3 0d 60 32 70 19 7d .......+K..`2p.} 00:22:29.459 00000250 ec 35 45 c7 53 ae 02 99 2e a8 c2 e8 6a d3 0c 16 .5E.S.......j... 00:22:29.459 00000260 be f9 aa 17 6c a0 cb 30 28 77 fc 25 73 0b 7f 21 ....l..0(w.%s..! 00:22:29.459 00000270 2c 19 16 bc 68 e6 13 aa ab 58 8a bf cd 60 74 97 ,...h....X...`t. 00:22:29.459 00000280 21 7e fb f2 65 b3 22 8a f7 46 72 91 56 81 67 6f !~..e."..Fr.V.go 00:22:29.459 00000290 36 8f c5 42 fc bb 82 af 44 61 88 0e 25 69 16 e1 6..B....Da..%i.. 00:22:29.459 000002a0 5a 4e 10 cf f7 80 3c 15 ff 8a a9 66 41 c4 83 d7 ZN....<....fA... 00:22:29.459 000002b0 04 19 19 0c 01 d6 ba 17 61 ab ae 48 01 4b 1c 1f ........a..H.K.. 00:22:29.459 000002c0 21 9a 0f 3f 81 dc 0f a1 de d2 fa 1c 23 6a 60 b6 !..?........#j`. 00:22:29.459 000002d0 3f 52 9a a2 d5 97 64 2c 36 6c 1f 64 eb 2f 5c ab ?R....d,6l.d./\. 00:22:29.459 000002e0 58 1d 7c 93 0a 4b 1b 26 64 eb 32 d0 6e da ff cd X.|..K.&d.2.n... 00:22:29.459 000002f0 38 7d b8 b0 a1 02 74 38 57 00 f9 a9 72 85 fb ea 8}....t8W...r... 00:22:29.459 dh secret: 00:22:29.459 00000000 cf ce 31 f8 1e bf 78 21 ea 41 76 43 37 ca fb 72 ..1...x!.AvC7..r 00:22:29.459 00000010 1d e4 1c 18 7a 61 38 c7 6b 92 e7 6c 2f eb 2d 58 ....za8.k..l/.-X 00:22:29.459 00000020 56 d2 0e ff 84 58 12 73 4c a7 99 ea 82 60 c9 0f V....X.sL....`.. 00:22:29.459 00000030 a3 f6 8d b8 d8 d8 cf 57 0b 7a 1f 44 d6 ab a3 2e .......W.z.D.... 00:22:29.459 00000040 3f e9 45 98 18 a0 cc 7a 9c 45 02 ca 94 11 01 37 ?.E....z.E.....7 00:22:29.459 00000050 3e 8c ce 92 98 17 be 0d 46 7b 65 3c f3 f7 43 f9 >.......F{e<..C. 00:22:29.459 00000060 06 e7 c9 27 0d 28 ca 45 0f 70 0d 35 a1 80 36 8c ...'.(.E.p.5..6. 00:22:29.459 00000070 b7 da 99 ed 79 e5 36 2c 16 c4 fa 0e 07 c1 8c 3b ....y.6,.......; 00:22:29.459 00000080 df c4 84 83 4f e1 66 d5 b7 87 06 e9 21 5c 42 98 ....O.f.....!\B. 00:22:29.459 00000090 d8 17 ca b4 e5 b5 09 aa 89 20 3e 3f ea ef 4f f4 ......... >?..O. 00:22:29.459 000000a0 bb 1a 4b 56 9d 9a 9b 6e 8a 1f 29 da 2d 65 9b 6b ..KV...n..).-e.k 00:22:29.459 000000b0 4b d7 fa 3b 87 4a b5 d0 15 c6 90 03 cb 82 76 34 K..;.J........v4 00:22:29.459 000000c0 09 3f 3e 6e 45 66 09 78 e1 58 99 83 a7 98 9d 38 .?>nEf.x.X.....8 00:22:29.459 000000d0 e8 74 9e d3 44 58 97 ba c5 f5 9b f2 db e2 91 43 .t..DX.........C 00:22:29.459 000000e0 f0 64 2e 6f 26 03 7b d4 cd ee 59 c3 e2 b1 36 c6 .d.o&.{...Y...6. 00:22:29.459 000000f0 ee 90 5d 46 3b 71 1a 11 cd 07 dc 91 23 1c 3a 39 ..]F;q......#.:9 00:22:29.459 00000100 21 6e 8b 05 06 1d c3 cc 4a 11 05 d2 f3 39 f5 0b !n......J....9.. 00:22:29.459 00000110 fd e9 10 c0 aa ef 35 80 d6 65 a5 d5 cd 60 3c ef ......5..e...`<. 00:22:29.459 00000120 96 76 e2 4f 1d cb d0 60 c3 6f bf 9b 34 f6 7e 54 .v.O...`.o..4.~T 00:22:29.459 00000130 ef 41 47 5c ab b6 66 0e ac c6 7d f1 39 3e 2c 69 .AG\..f...}.9>,i 00:22:29.459 00000140 33 68 32 3b 80 a9 58 2c 14 46 48 7c 0e 19 fb b9 3h2;..X,.FH|.... 
00:22:29.459 00000150 56 6a 48 f3 ee 24 b8 78 fd aa 9c c6 78 99 4a b8 VjH..$.x....x.J. 00:22:29.459 00000160 00 5d d2 2d d1 b2 58 ee 0c 7c bc 43 3d a0 d7 cc .].-..X..|.C=... 00:22:29.459 00000170 2a 16 fd 4f aa 16 d9 ae 58 cc 98 7c 2e eb 3b 05 *..O....X..|..;. 00:22:29.459 00000180 0e 11 6d 8c 16 a8 ce 5a 76 b7 ca fb f1 52 ab 99 ..m....Zv....R.. 00:22:29.459 00000190 28 6b 54 e7 3e 0b e9 ae 50 58 03 b1 c4 aa 36 82 (kT.>...PX....6. 00:22:29.459 000001a0 25 f0 86 73 c0 3d b8 13 24 2a 21 e7 cd 26 5a 11 %..s.=..$*!..&Z. 00:22:29.459 000001b0 94 c0 38 93 08 cf 42 90 c7 d2 a9 17 9e 8e b5 a3 ..8...B......... 00:22:29.459 000001c0 b2 22 c0 a8 e4 1f 29 54 27 9c d6 24 3a be 95 0e ."....)T'..$:... 00:22:29.459 000001d0 0b e8 44 f8 83 dd c2 6a 42 22 09 06 3f c0 b4 49 ..D....jB"..?..I 00:22:29.459 000001e0 e9 e2 eb 54 c8 ee 0b 20 d7 c5 fe 32 8c 82 a8 c4 ...T... ...2.... 00:22:29.459 000001f0 11 82 7f 71 35 26 2b cf c3 d5 4d b5 f4 75 33 12 ...q5&+...M..u3. 00:22:29.459 00000200 19 b3 6e 67 4f 78 cd 1b 95 51 13 62 cd 17 88 56 ..ngOx...Q.b...V 00:22:29.459 00000210 a3 c7 2a ef ac 1e 3e 30 57 de 42 87 70 27 aa d2 ..*...>0W.B.p'.. 00:22:29.459 00000220 66 a8 7c f7 43 09 9a 80 e7 18 0d bc 18 57 49 ed f.|.C........WI. 00:22:29.459 00000230 f6 71 f3 a1 22 2d 81 be b3 b0 f6 5c a7 df b3 46 .q.."-.....\...F 00:22:29.459 00000240 1c 35 f0 f5 c8 ed e5 7d 05 10 a1 fc 8e 3b 64 b3 .5.....}.....;d. 00:22:29.459 00000250 1a 7a de b1 15 f1 fe 68 28 96 0b 44 e2 7a b9 26 .z.....h(..D.z.& 00:22:29.459 00000260 63 86 c1 c7 66 37 ca 71 e5 0a fa 08 97 9d d8 10 c...f7.q........ 00:22:29.459 00000270 82 00 e5 85 98 ce fb 11 46 9c c6 c7 2e cf be 0c ........F....... 00:22:29.459 00000280 c8 b6 be 57 cf 00 ce 10 85 76 32 1d 14 5a a6 dd ...W.....v2..Z.. 00:22:29.459 00000290 07 24 f3 35 7d 14 56 bf ed 73 d1 d4 c1 f5 ad 67 .$.5}.V..s.....g 00:22:29.459 000002a0 12 c6 e0 78 fe 89 b4 06 9e 2b 07 28 0e 88 23 fc ...x.....+.(..#. 00:22:29.459 000002b0 e7 b8 a6 b6 aa 0c 78 ed 93 96 e4 5d 69 b7 eb 65 ......x....]i..e 00:22:29.459 000002c0 37 c8 70 34 41 28 d7 6f 57 90 bd 09 68 be fc 3d 7.p4A(.oW...h..= 00:22:29.459 000002d0 f7 98 fc d4 17 ef 34 6b 03 97 22 1b d3 02 e5 02 ......4k.."..... 00:22:29.459 000002e0 01 17 f3 f4 b2 ca 80 b9 ed f4 73 38 e9 2a cf c5 ..........s8.*.. 
00:22:29.459 000002f0 de 0f d8 e4 a1 21 4a fe 9d 5f b0 ab d4 fa 8e 70 .....!J.._.....p 00:22:29.459 [2024-09-27 13:27:07.190267] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key1, hash=1, dhgroup=4, seq=3775755206, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.459 [2024-09-27 13:27:07.190618] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.459 [2024-09-27 13:27:07.242627] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.459 [2024-09-27 13:27:07.243211] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.459 [2024-09-27 13:27:07.243407] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.459 [2024-09-27 13:27:07.243713] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.459 [2024-09-27 13:27:07.376388] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.459 [2024-09-27 13:27:07.376755] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.459 [2024-09-27 13:27:07.377014] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 4 (ffdhe6144) 00:22:29.459 [2024-09-27 13:27:07.377130] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.459 [2024-09-27 13:27:07.377423] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.459 ctrlr pubkey: 00:22:29.460 00000000 96 c5 96 6e b1 39 5d 9e 6f c1 14 fa d2 34 41 85 ...n.9].o....4A. 00:22:29.460 00000010 11 4e c3 e3 c2 13 21 5b ab ca 17 b2 57 1c c7 8f .N....![....W... 00:22:29.460 00000020 e7 84 98 34 f3 7c f2 b4 0e f8 82 91 10 3f 31 09 ...4.|.......?1. 00:22:29.460 00000030 d0 c5 2d 5b 11 81 dd 5b 6b 46 03 ae 0b c6 48 e5 ..-[...[kF....H. 00:22:29.460 00000040 8b 50 a5 7c 78 a2 ad fd ef 94 a9 97 70 40 96 1b .P.|x.......p@.. 00:22:29.460 00000050 f5 c9 25 1a 72 5e 7c 49 ec 60 6d 91 61 03 86 94 ..%.r^|I.`m.a... 00:22:29.460 00000060 d5 a0 95 a8 8a db 53 ef ed 90 7f 3b ec be 4a 13 ......S....;..J. 00:22:29.460 00000070 e2 44 38 e9 2d ad c0 b9 ab 37 17 c5 af c1 24 37 .D8.-....7....$7 00:22:29.460 00000080 df 58 91 b4 02 b5 bd fa d3 ef 66 e9 41 62 43 bf .X........f.AbC. 00:22:29.460 00000090 4b 2e 31 21 26 69 3a 58 0a 39 4e b6 bc fa 0a 09 K.1!&i:X.9N..... 00:22:29.460 000000a0 ad 8a e6 78 1f 6e 57 fb a9 97 7b e2 db c1 74 a9 ...x.nW...{...t. 00:22:29.460 000000b0 71 7e f3 a0 ca 25 32 9f 30 76 0a 4a d2 24 e6 b1 q~...%2.0v.J.$.. 00:22:29.460 000000c0 e0 ac de 64 80 be 49 5d e2 54 44 0e 9e 22 ff 80 ...d..I].TD..".. 00:22:29.460 000000d0 9d 70 45 73 b6 48 15 21 a6 e2 0e 77 84 71 e4 9b .pEs.H.!...w.q.. 00:22:29.460 000000e0 3d 04 74 2f 59 52 86 c2 6e dc 56 a4 23 ba 07 a0 =.t/YR..n.V.#... 00:22:29.460 000000f0 0b c8 11 b1 6d 83 27 14 55 02 4d 1d 8b 19 4d e5 ....m.'.U.M...M. 
00:22:29.460 00000100 3b 61 d7 a3 61 5a 8f ad 67 85 fe f4 e3 92 96 ae ;a..aZ..g....... 00:22:29.460 00000110 a4 25 0f ae 09 e2 20 dd 7c 88 5f 67 2b d3 c2 d0 .%.... .|._g+... 00:22:29.460 00000120 21 f7 31 af e6 a1 ae 66 ec 29 0e 8b b0 d1 8f 17 !.1....f.)...... 00:22:29.460 00000130 b7 31 98 a5 ed d8 75 dc db 54 61 c1 6e 07 c7 2f .1....u..Ta.n../ 00:22:29.460 00000140 53 14 09 9d ed fc da ff 3c 2e 20 8f aa cf b3 0f S.......<. ..... 00:22:29.460 00000150 2e 08 5f 72 92 91 e4 58 63 83 94 7a 8b 10 ca 13 .._r...Xc..z.... 00:22:29.460 00000160 7a 06 54 60 59 5c 22 db bc 50 17 7e fc cd 8e 42 z.T`Y\"..P.~...B 00:22:29.460 00000170 eb e2 f0 71 35 86 e2 0e a2 2f ab 20 cf d6 b1 0e ...q5..../. .... 00:22:29.460 00000180 d5 e6 fa 59 0d 8e 91 22 b7 a3 cd 11 3e b3 39 d4 ...Y..."....>.9. 00:22:29.460 00000190 a5 c6 d2 d4 28 23 ff 95 20 0d ed 86 f4 ba 3e 2c ....(#.. .....>, 00:22:29.460 000001a0 0c 69 28 fe 12 f3 d9 03 f1 de 4e 5d 02 35 41 b7 .i(.......N].5A. 00:22:29.460 000001b0 3c bc 4d 2e 09 bc b9 a5 9c 17 e9 0d 1b 9f f6 46 <.M............F 00:22:29.460 000001c0 9d 9d 2a f9 56 be 5c 44 a4 16 c8 f0 a3 c8 e2 22 ..*.V.\D......." 00:22:29.460 000001d0 4d b3 ef 83 46 cf bb 25 5e 37 0a 7f 94 68 58 e4 M...F..%^7...hX. 00:22:29.460 000001e0 e2 eb ff 43 8f b9 2e 14 c7 0b 99 5f 8c fb d3 d6 ...C......._.... 00:22:29.460 000001f0 93 73 bd fc ae 12 d1 9a 9d 49 08 49 bc cf 3b 85 .s.......I.I..;. 00:22:29.460 00000200 63 84 c6 7b eb 78 e2 c9 70 ec 33 d8 7e 77 a5 95 c..{.x..p.3.~w.. 00:22:29.460 00000210 b8 52 e4 9b b9 ad 15 bc 2b 4b d2 bc 66 63 15 9b .R......+K..fc.. 00:22:29.460 00000220 7b 50 d8 9c 18 14 f2 70 52 8d 46 9c 2d 7b 5a e1 {P.....pR.F.-{Z. 00:22:29.460 00000230 b2 bb 58 45 51 1c 5e da 34 dc 25 af 50 b7 28 d6 ..XEQ.^.4.%.P.(. 00:22:29.460 00000240 3d 73 2a 8b 7f 7c f9 63 1d 49 10 0a d7 60 f4 dc =s*..|.c.I...`.. 00:22:29.460 00000250 6d 9e 72 c2 0a 82 e1 94 95 79 36 a5 bd 18 17 63 m.r......y6....c 00:22:29.460 00000260 85 4e 0d 73 19 6f e5 79 67 d1 f6 27 19 73 94 e0 .N.s.o.yg..'.s.. 00:22:29.460 00000270 a7 1f 1a ba c7 da 73 e3 ca 70 24 52 c7 cd 63 ce ......s..p$R..c. 00:22:29.460 00000280 bb 02 ef 8c c5 01 d6 cb 13 35 09 76 d6 79 36 25 .........5.v.y6% 00:22:29.460 00000290 a1 a4 91 7b 6d d4 f3 9f 75 b3 72 48 56 c0 e4 0a ...{m...u.rHV... 00:22:29.460 000002a0 59 31 17 b2 9f a2 4d 96 ec 52 ae 6f a1 88 ae a6 Y1....M..R.o.... 00:22:29.460 000002b0 03 3c 6b b2 9a 6c ad ab 3c ed b3 95 15 69 bf 5d .k..)..z..Y 00:22:29.460 000000f0 f3 fa 1c d6 f9 86 62 0e 7c d3 14 78 d7 66 f9 7a ......b.|..x.f.z 00:22:29.460 00000100 61 6c c7 38 7b ae b8 c4 00 48 38 c7 6a 36 b5 35 al.8{....H8.j6.5 00:22:29.460 00000110 f3 76 bd df 3c fd 56 9d 3c bc 46 e7 4e 7d de fd .v..<.V.<.F.N}.. 00:22:29.460 00000120 87 ee 5a 36 e8 23 42 6e 40 b6 b5 25 88 13 bb 7f ..Z6.#Bn@..%.... 00:22:29.460 00000130 9e 2c 3c 53 19 bb 83 6e f9 9b 2d d0 98 93 55 9a .,.[.1.+...d 00:22:29.460 000001d0 a5 69 66 2e 50 b6 bf a7 3f 64 7a cc 9f b7 5d 1a .if.P...?dz...]. 00:22:29.460 000001e0 ca 80 29 ff eb 00 19 e9 4a e3 34 b5 21 d8 27 3f ..).....J.4.!.'? 00:22:29.460 000001f0 5b fb 58 fa 49 ae 78 8e 94 a0 c6 7b ad 4a 77 ad [.X.I.x....{.Jw. 00:22:29.460 00000200 9f fa 3c 55 82 e5 30 0c d2 50 b9 11 dd ba 86 25 ..ox7.p.W.. 00:22:29.460 00000250 98 b9 05 c2 61 73 fa 4e 89 4e 1b 3f d3 3b d8 df ....as.N.N.?.;.. 00:22:29.460 00000260 a3 af fd ae 3e 37 3d 38 df 36 2d 4a 2a 07 fe c2 ....>7=8.6-J*... 00:22:29.460 00000270 fb f9 83 04 04 20 b5 e8 65 03 f9 b2 1a d6 0e d3 ..... ..e....... 
00:22:29.460 00000280 30 b3 c4 cb c2 75 e0 3d 93 43 dd a0 e2 64 54 29 0....u.=.C...dT) 00:22:29.460 00000290 c7 da 41 f2 25 80 f0 81 09 10 75 6f 2f e9 75 47 ..A.%.....uo/.uG 00:22:29.460 000002a0 2d cf 2f e9 46 ff 3d c2 4a 54 57 db 1e 22 59 65 -./.F.=.JTW.."Ye 00:22:29.460 000002b0 7e 26 2d 73 35 15 95 22 a2 1f 77 34 f8 62 67 7f ~&-s5.."..w4.bg. 00:22:29.460 000002c0 c3 c0 a8 9a 21 22 7e 50 68 84 78 c2 15 99 0e f8 ....!"~Ph.x..... 00:22:29.460 000002d0 f0 7a b5 e4 aa 0f e5 55 47 2a 66 d3 48 e3 84 d7 .z.....UG*f.H... 00:22:29.460 000002e0 d2 f0 38 13 f9 5c 3e ce 35 37 78 c6 1a 6c 9b 86 ..8..\>.57x..l.. 00:22:29.460 000002f0 bd 2b 68 6b 90 29 d2 92 c7 40 dc d9 29 cd 19 c9 .+hk.)...@..)... 00:22:29.460 dh secret: 00:22:29.460 00000000 28 68 4d fb b3 4f 0b 33 e6 23 52 6c 03 e5 35 32 (hM..O.3.#Rl..52 00:22:29.460 00000010 36 51 6d 0e b3 34 ec cb b6 4a fb e3 b1 d0 4e dd 6Qm..4...J....N. 00:22:29.460 00000020 65 38 16 1c f9 ec f8 99 b4 4c 3b 90 81 21 63 d2 e8.......L;..!c. 00:22:29.460 00000030 d5 01 b3 0f be 08 c9 67 99 4d a5 c4 95 b8 be 39 .......g.M.....9 00:22:29.460 00000040 f7 ef 01 e2 56 09 05 2e fe 6f 7e 70 85 02 fa 04 ....V....o~p.... 00:22:29.460 00000050 8d ac 9e a4 7c 2f 8c 7c b9 9c 06 36 3b 36 19 76 ....|/.|...6;6.v 00:22:29.460 00000060 69 24 43 16 42 54 a0 e7 7f 56 8d 0d 47 93 20 ee i$C.BT...V..G. . 00:22:29.460 00000070 3b f3 48 5a 0a ee ed a2 cf c4 a0 1e a0 70 57 ec ;.HZ.........pW. 00:22:29.460 00000080 dc 2b 34 3d f8 da d6 9d e3 f1 fa 99 2b f8 fc d5 .+4=........+... 00:22:29.460 00000090 77 a7 0e fa 4c bd 53 95 b6 0d ac c5 ea e8 bf 39 w...L.S........9 00:22:29.460 000000a0 b7 6d 48 31 9d 60 b8 33 fe b4 76 5c 6d 53 98 06 .mH1.`.3..v\mS.. 00:22:29.460 000000b0 da d8 31 7a 70 57 1c 36 f9 39 eb b8 dc c1 3b 3c ..1zpW.6.9....;< 00:22:29.460 000000c0 f7 44 3c 55 ff 76 45 08 e8 22 69 d4 f1 c2 f9 f4 .Dm.X>w 00:22:29.461 00000180 73 d9 7e 91 36 7d b3 70 10 b2 6e ac a4 dc 3d 2f s.~.6}.p..n...=/ 00:22:29.461 00000190 c8 19 4a 6f 7b 4b cc 67 e4 58 b4 77 81 5f d8 0b ..Jo{K.g.X.w._.. 00:22:29.461 000001a0 25 31 cd 9d a5 69 26 51 fe 27 8f 10 75 5a aa 9c %1...i&Q.'..uZ.. 00:22:29.461 000001b0 39 b8 c8 27 e1 a1 ac 5a 04 29 6f 61 62 24 e5 c9 9..'...Z.)oab$.. 00:22:29.461 000001c0 22 af 03 78 1b d3 43 8d 24 97 2c 11 9b d9 f9 5f "..x..C.$.,...._ 00:22:29.461 000001d0 f2 fe ca 26 c8 87 a5 69 99 cd 2d d8 e1 e2 61 8b ...&...i..-...a. 00:22:29.461 000001e0 91 92 1a fa 78 b7 1d 31 d7 9e dd e0 b4 50 9c be ....x..1.....P.. 00:22:29.461 000001f0 b3 6e 86 b6 4c c9 8e 17 01 f0 de 33 e9 08 67 19 .n..L......3..g. 00:22:29.461 00000200 98 5a ef 62 9c e2 3c 5e 43 a8 a6 de fd 53 67 d1 .Z.b..<^C....Sg. 00:22:29.461 00000210 48 ad 34 c9 59 24 ff 49 ef bf 1b be ba ce 6b 6d H.4.Y$.I......km 00:22:29.461 00000220 d2 1b 74 a0 e8 b6 c7 17 db c7 3d f1 e1 aa 09 a7 ..t.......=..... 00:22:29.461 00000230 7a 8c 45 57 52 19 97 64 c0 f1 d2 8c a0 ae b0 82 z.EWR..d........ 00:22:29.461 00000240 03 85 9b 9d 27 3c e0 43 0d 9c 9d c8 f4 38 b6 cd ....'<.C.....8.. 00:22:29.461 00000250 97 a0 60 f0 79 74 96 51 6b 24 b7 8d ab 32 e4 12 ..`.yt.Qk$...2.. 00:22:29.461 00000260 8e 35 14 a4 57 52 e5 7b f3 56 9f b0 9a e5 90 0e .5..WR.{.V...... 00:22:29.461 00000270 a0 f1 b8 a7 9a 3f 8a 39 dc a4 63 48 1f 77 f6 8f .....?.9..cH.w.. 00:22:29.461 00000280 49 f8 f3 a7 a7 5e c2 4f 0c 69 7d 71 91 c0 04 36 I....^.O.i}q...6 00:22:29.461 00000290 f1 23 5b 59 4f b1 97 b9 2c 3d 88 86 ee c8 54 8d .#[YO...,=....T. 
00:22:29.461 000002a0 05 f7 ef fc 25 b1 09 08 9c cc 6e 25 f6 cc 29 56 ....%.....n%..)V 00:22:29.461 000002b0 44 65 4d 19 47 f5 7e df d0 f8 ff be ec fc 28 0d DeM.G.~.......(. 00:22:29.461 000002c0 00 68 9b 9a 5b ce c7 11 c4 4c ae 3e 0f c7 3f 67 .h..[....L.>..?g 00:22:29.461 000002d0 fe 43 17 f8 cb e2 16 0d 2c 9b 4a ca d8 8e 42 98 .C......,.J...B. 00:22:29.461 000002e0 eb 1c 03 c3 e1 2b c7 a9 61 2d c8 45 be 5c 4f af .....+..a-.E.\O. 00:22:29.461 000002f0 cf 7a 52 06 54 10 d9 28 bf 90 db 83 2f 87 71 5c .zR.T..(..../.q\ 00:22:29.461 [2024-09-27 13:27:07.446638] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key2, hash=1, dhgroup=4, seq=3775755207, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.461 [2024-09-27 13:27:07.447007] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.461 [2024-09-27 13:27:07.495799] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.461 [2024-09-27 13:27:07.496238] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.461 [2024-09-27 13:27:07.496522] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.461 [2024-09-27 13:27:07.496943] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.461 [2024-09-27 13:27:07.548831] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.461 [2024-09-27 13:27:07.549214] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 1 (sha256) 00:22:29.461 [2024-09-27 13:27:07.549622] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 4 (ffdhe6144) 00:22:29.461 [2024-09-27 13:27:07.550012] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.461 [2024-09-27 13:27:07.550296] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.461 ctrlr pubkey: 00:22:29.461 00000000 96 c5 96 6e b1 39 5d 9e 6f c1 14 fa d2 34 41 85 ...n.9].o....4A. 00:22:29.461 00000010 11 4e c3 e3 c2 13 21 5b ab ca 17 b2 57 1c c7 8f .N....![....W... 00:22:29.461 00000020 e7 84 98 34 f3 7c f2 b4 0e f8 82 91 10 3f 31 09 ...4.|.......?1. 00:22:29.461 00000030 d0 c5 2d 5b 11 81 dd 5b 6b 46 03 ae 0b c6 48 e5 ..-[...[kF....H. 00:22:29.461 00000040 8b 50 a5 7c 78 a2 ad fd ef 94 a9 97 70 40 96 1b .P.|x.......p@.. 00:22:29.461 00000050 f5 c9 25 1a 72 5e 7c 49 ec 60 6d 91 61 03 86 94 ..%.r^|I.`m.a... 00:22:29.461 00000060 d5 a0 95 a8 8a db 53 ef ed 90 7f 3b ec be 4a 13 ......S....;..J. 00:22:29.461 00000070 e2 44 38 e9 2d ad c0 b9 ab 37 17 c5 af c1 24 37 .D8.-....7....$7 00:22:29.461 00000080 df 58 91 b4 02 b5 bd fa d3 ef 66 e9 41 62 43 bf .X........f.AbC. 00:22:29.461 00000090 4b 2e 31 21 26 69 3a 58 0a 39 4e b6 bc fa 0a 09 K.1!&i:X.9N..... 00:22:29.461 000000a0 ad 8a e6 78 1f 6e 57 fb a9 97 7b e2 db c1 74 a9 ...x.nW...{...t. 
00:22:29.461 000000b0 71 7e f3 a0 ca 25 32 9f 30 76 0a 4a d2 24 e6 b1 q~...%2.0v.J.$.. 00:22:29.461 000000c0 e0 ac de 64 80 be 49 5d e2 54 44 0e 9e 22 ff 80 ...d..I].TD..".. 00:22:29.461 000000d0 9d 70 45 73 b6 48 15 21 a6 e2 0e 77 84 71 e4 9b .pEs.H.!...w.q.. 00:22:29.461 000000e0 3d 04 74 2f 59 52 86 c2 6e dc 56 a4 23 ba 07 a0 =.t/YR..n.V.#... 00:22:29.461 000000f0 0b c8 11 b1 6d 83 27 14 55 02 4d 1d 8b 19 4d e5 ....m.'.U.M...M. 00:22:29.461 00000100 3b 61 d7 a3 61 5a 8f ad 67 85 fe f4 e3 92 96 ae ;a..aZ..g....... 00:22:29.461 00000110 a4 25 0f ae 09 e2 20 dd 7c 88 5f 67 2b d3 c2 d0 .%.... .|._g+... 00:22:29.461 00000120 21 f7 31 af e6 a1 ae 66 ec 29 0e 8b b0 d1 8f 17 !.1....f.)...... 00:22:29.461 00000130 b7 31 98 a5 ed d8 75 dc db 54 61 c1 6e 07 c7 2f .1....u..Ta.n../ 00:22:29.461 00000140 53 14 09 9d ed fc da ff 3c 2e 20 8f aa cf b3 0f S.......<. ..... 00:22:29.461 00000150 2e 08 5f 72 92 91 e4 58 63 83 94 7a 8b 10 ca 13 .._r...Xc..z.... 00:22:29.461 00000160 7a 06 54 60 59 5c 22 db bc 50 17 7e fc cd 8e 42 z.T`Y\"..P.~...B 00:22:29.461 00000170 eb e2 f0 71 35 86 e2 0e a2 2f ab 20 cf d6 b1 0e ...q5..../. .... 00:22:29.461 00000180 d5 e6 fa 59 0d 8e 91 22 b7 a3 cd 11 3e b3 39 d4 ...Y..."....>.9. 00:22:29.461 00000190 a5 c6 d2 d4 28 23 ff 95 20 0d ed 86 f4 ba 3e 2c ....(#.. .....>, 00:22:29.461 000001a0 0c 69 28 fe 12 f3 d9 03 f1 de 4e 5d 02 35 41 b7 .i(.......N].5A. 00:22:29.461 000001b0 3c bc 4d 2e 09 bc b9 a5 9c 17 e9 0d 1b 9f f6 46 <.M............F 00:22:29.461 000001c0 9d 9d 2a f9 56 be 5c 44 a4 16 c8 f0 a3 c8 e2 22 ..*.V.\D......." 00:22:29.461 000001d0 4d b3 ef 83 46 cf bb 25 5e 37 0a 7f 94 68 58 e4 M...F..%^7...hX. 00:22:29.461 000001e0 e2 eb ff 43 8f b9 2e 14 c7 0b 99 5f 8c fb d3 d6 ...C......._.... 00:22:29.461 000001f0 93 73 bd fc ae 12 d1 9a 9d 49 08 49 bc cf 3b 85 .s.......I.I..;. 00:22:29.461 00000200 63 84 c6 7b eb 78 e2 c9 70 ec 33 d8 7e 77 a5 95 c..{.x..p.3.~w.. 00:22:29.461 00000210 b8 52 e4 9b b9 ad 15 bc 2b 4b d2 bc 66 63 15 9b .R......+K..fc.. 00:22:29.461 00000220 7b 50 d8 9c 18 14 f2 70 52 8d 46 9c 2d 7b 5a e1 {P.....pR.F.-{Z. 00:22:29.461 00000230 b2 bb 58 45 51 1c 5e da 34 dc 25 af 50 b7 28 d6 ..XEQ.^.4.%.P.(. 00:22:29.461 00000240 3d 73 2a 8b 7f 7c f9 63 1d 49 10 0a d7 60 f4 dc =s*..|.c.I...`.. 00:22:29.461 00000250 6d 9e 72 c2 0a 82 e1 94 95 79 36 a5 bd 18 17 63 m.r......y6....c 00:22:29.461 00000260 85 4e 0d 73 19 6f e5 79 67 d1 f6 27 19 73 94 e0 .N.s.o.yg..'.s.. 00:22:29.461 00000270 a7 1f 1a ba c7 da 73 e3 ca 70 24 52 c7 cd 63 ce ......s..p$R..c. 00:22:29.461 00000280 bb 02 ef 8c c5 01 d6 cb 13 35 09 76 d6 79 36 25 .........5.v.y6% 00:22:29.461 00000290 a1 a4 91 7b 6d d4 f3 9f 75 b3 72 48 56 c0 e4 0a ...{m...u.rHV... 00:22:29.461 000002a0 59 31 17 b2 9f a2 4d 96 ec 52 ae 6f a1 88 ae a6 Y1....M..R.o.... 00:22:29.461 000002b0 03 3c 6b b2 9a 6c ad ab 3c ed b3 95 15 69 bf 5d ...5%.... 00:22:29.461 00000070 48 a8 1f 51 f5 ea c8 16 0d bb 96 c6 b8 57 66 99 H..Q.........Wf. 00:22:29.461 00000080 2c 42 8e bc 28 ad 29 82 dd b2 1d 48 c9 18 2c 42 ,B..(.)....H..,B 00:22:29.461 00000090 b5 9e 77 c4 bc 16 cc f6 98 28 a0 a3 1b 15 dc 26 ..w......(.....& 00:22:29.461 000000a0 f2 f3 36 9e b2 3f d7 2a 0c 98 b6 cb 41 f8 da cc ..6..?.*....A... 00:22:29.461 000000b0 cc 3b 65 21 47 24 2c a2 8e 74 66 67 fc c3 4d 66 .;e!G$,..tfg..Mf 00:22:29.461 000000c0 87 8d 63 6c 85 0a e4 f0 1e af 42 8d 6c b5 9f ea ..cl......B.l... 00:22:29.461 000000d0 94 7d 0c 65 3d 6f 20 e2 54 64 2d 43 ba a0 46 97 .}.e=o .Td-C..F. 
00:22:29.461 000000e0 43 90 e9 93 2f 60 07 54 9b 78 e0 66 34 fe 86 a3 C.../`.T.x.f4... 00:22:29.461 000000f0 b2 3b 68 53 d5 6a 45 dc d4 d3 b4 5e 44 2f 73 d6 .;hS.jE....^D/s. 00:22:29.461 00000100 39 0f 8e 48 a2 2d 34 69 62 ec 2a 3a 7f d3 4d 6b 9..H.-4ib.*:..Mk 00:22:29.461 00000110 12 4d 27 76 42 46 b4 61 da d5 1c eb f9 c0 04 c4 .M'vBF.a........ 00:22:29.461 00000120 82 cf 73 0d f3 f2 c8 7e c1 5e a5 dc cf 26 9c ec ..s....~.^...&.. 00:22:29.461 00000130 10 27 4c d9 6b 78 ff e2 2e f9 bd a6 9d 97 2a 81 .'L.kx........*. 00:22:29.461 00000140 ca b3 86 b9 7f aa 8d f6 3f d5 29 d3 03 9c f2 15 ........?.)..... 00:22:29.462 00000150 16 db d9 89 39 cd 93 05 eb a2 8c 98 04 71 4e 0a ....9........qN. 00:22:29.462 00000160 cd da c4 56 ee 63 8f 21 1d 20 3b 67 4b 27 fc 61 ...V.c.!. ;gK'.a 00:22:29.462 00000170 c0 69 0d 62 01 2f 35 d3 f1 78 78 61 68 7a 4d e8 .i.b./5..xxahzM. 00:22:29.462 00000180 4e c5 39 ae 1b ae a0 1f 83 30 d6 d7 6a d8 9f 58 N.9......0..j..X 00:22:29.462 00000190 d1 f9 81 24 7f d1 fb ae 2e 9e bb 32 34 40 4e 75 ...$.......24@Nu 00:22:29.462 000001a0 f4 91 d2 27 17 b6 db 18 fe 4d 83 3d e4 11 23 a2 ...'.....M.=..#. 00:22:29.462 000001b0 86 2f 28 bf 11 b2 4d d3 71 1f 45 f7 6a 94 0e 66 ./(...M.q.E.j..f 00:22:29.462 000001c0 c6 6e 43 71 ee ef 01 76 69 85 df 69 71 8b f6 5b .nCq...vi..iq..[ 00:22:29.462 000001d0 9b a2 c2 6f ed ee 9e 3e 11 ec a5 78 37 35 57 9f ...o...>...x75W. 00:22:29.462 000001e0 39 d3 bc 8f 57 9b 27 bb d2 d7 85 68 ee 69 b5 89 9...W.'....h.i.. 00:22:29.462 000001f0 83 07 d8 52 6f b1 51 7a f3 1c d2 cc 70 40 9c c7 ...Ro.Qz....p@.. 00:22:29.462 00000200 1a c6 ed f2 30 51 ef 47 bb 60 92 00 fe 29 f8 f6 ....0Q.G.`...).. 00:22:29.462 00000210 09 bc 3e a4 f9 54 46 fc ed 3e 13 96 4c cd 8c 37 ..>..TF..>..L..7 00:22:29.462 00000220 91 5a 85 f2 03 59 bf cd 9b b4 99 fc f7 c2 6b 99 .Z...Y........k. 00:22:29.462 00000230 be 29 b9 70 81 1c e6 d7 bc dd 8b db 02 ee 20 31 .).p.......... 1 00:22:29.462 00000240 e4 c8 0b 44 19 48 70 ce cc 92 78 1c 18 06 b9 7b ...D.Hp...x....{ 00:22:29.462 00000250 1c c9 90 81 bd 05 c8 7d 39 40 a1 82 79 58 39 b2 .......}9@..yX9. 00:22:29.462 00000260 a2 1a 9a 68 d9 58 f6 4a 11 4a 7f 38 5a b4 cf e0 ...h.X.J.J.8Z... 00:22:29.462 00000270 9b a2 ef 69 c1 15 69 ce be dd d4 10 f3 80 d3 0a ...i..i......... 00:22:29.462 00000280 2b fd e0 a2 96 14 6d a7 eb e3 fe 6f 27 c3 62 c8 +.....m....o'.b. 00:22:29.462 00000290 7f c2 e5 6e 4a 40 b2 77 4d 2c 19 67 92 3d 07 40 ...nJ@.wM,.g.=.@ 00:22:29.462 000002a0 0f 21 5b 6d e5 7d 4a 8e 49 e7 34 1f 1d 87 ca 74 .![m.}J.I.4....t 00:22:29.462 000002b0 45 c5 8e bf e6 0a 2f da 1f 2b ac ed f2 58 ad 36 E...../..+...X.6 00:22:29.462 000002c0 cf c9 1d 7b 7c da 0e b1 7f a3 ea 9e ae 13 9d 5a ...{|..........Z 00:22:29.462 000002d0 80 88 0a 33 d8 24 26 05 db 5c 21 b3 d5 b0 3b 49 ...3.$&..\!...;I 00:22:29.462 000002e0 e4 11 ea c5 08 f5 ca 3d 40 a0 d5 e2 72 cf c9 b5 .......=@...r... 00:22:29.462 000002f0 dd b1 69 8e 87 ea 62 95 e8 3a 7b a1 72 83 49 f5 ..i...b..:{.r.I. 00:22:29.462 dh secret: 00:22:29.462 00000000 4b fc 60 3e 61 4d 24 17 46 cb 52 65 10 c5 bb cd K.`>aM$.F.Re.... 00:22:29.462 00000010 c8 f9 e4 b9 cc 61 d5 a8 2d 42 5c 3b e4 31 46 73 .....a..-B\;.1Fs 00:22:29.462 00000020 ed c0 88 e2 40 47 ec 74 f2 fa 90 9f f1 cc 39 66 ....@G.t......9f 00:22:29.462 00000030 a1 8b 21 24 32 28 15 dc 65 c9 b7 2d 81 57 7f 9e ..!$2(..e..-.W.. 00:22:29.462 00000040 74 fe d1 53 bb e9 2c 01 00 ef 33 23 74 4d 36 1e t..S..,...3#tM6. 00:22:29.462 00000050 61 27 de 69 d0 29 e5 49 bb cf 47 3d cf a3 81 11 a'.i.).I..G=.... 
00:22:29.462 00000060 f7 a1 ba 3e c7 e9 1c 36 a5 d3 90 80 dd 09 a9 ec ...>...6........ 00:22:29.462 00000070 b1 fe 90 cb ab 47 e8 05 20 fc 6e a0 fc 72 c2 b3 .....G.. .n..r.. 00:22:29.462 00000080 6e 00 62 b1 c7 96 14 3a 92 5f 7c ad 8f 2d a2 91 n.b....:._|..-.. 00:22:29.462 00000090 82 a3 0e c8 63 6d 05 f3 ec 50 a0 50 95 05 08 92 ....cm...P.P.... 00:22:29.462 000000a0 81 df fd fc dd f3 b2 39 c8 ab 18 00 86 fa e9 2b .......9.......+ 00:22:29.462 000000b0 99 f9 10 26 f2 c0 4a ff 35 ec 28 51 72 71 05 80 ...&..J.5.(Qrq.. 00:22:29.462 000000c0 f1 55 4e 4d d9 39 23 43 b0 eb ef 37 77 5d 31 fc .UNM.9#C...7w]1. 00:22:29.462 000000d0 57 95 af b1 b5 ae 9b 38 17 94 90 04 57 72 92 e5 W......8....Wr.. 00:22:29.462 000000e0 fb 6c 5c 0e bb 06 cf 8b 93 7a d5 3c 80 54 42 95 .l\......z.<.TB. 00:22:29.462 000000f0 ee ad 64 9c 56 65 9c 7a 2d 09 34 c5 e8 7a 71 81 ..d.Ve.z-.4..zq. 00:22:29.462 00000100 47 e3 32 5a 6f 55 bb df a3 ef 1d 27 3a 15 44 97 G.2ZoU.....':.D. 00:22:29.462 00000110 7c 76 bf df ec 26 57 b1 30 d8 b2 af 41 85 bd 7d |v...&W.0...A..} 00:22:29.462 00000120 02 10 63 24 0a da c5 86 d1 5b a9 a6 a6 c4 a2 0a ..c$.....[...... 00:22:29.462 00000130 3d 25 a5 eb 07 34 51 b3 a5 98 e3 a3 6c f9 dc 97 =%...4Q.....l... 00:22:29.462 00000140 e8 99 4c cf 95 47 da 7d 38 8c c1 35 8c ba 75 e9 ..L..G.}8..5..u. 00:22:29.462 00000150 13 b0 72 8f 50 80 c2 7f f2 16 5f 16 de 7a c6 7d ..r.P....._..z.} 00:22:29.462 00000160 ef 77 f7 ed 43 fa ee 23 98 78 da c8 a9 d9 68 5a .w..C..#.x....hZ 00:22:29.462 00000170 28 20 ff 40 06 ad a0 08 f1 76 2e f6 5f 95 fd 69 ( .@.....v.._..i 00:22:29.462 00000180 52 36 68 90 68 46 2d 40 c0 94 4f 91 e7 3e c8 e4 R6h.hF-@..O..>.. 00:22:29.462 00000190 d7 b2 31 4f fa a3 6b 9c a1 0c f6 20 db a0 74 ed ..1O..k.... ..t. 00:22:29.462 000001a0 50 f9 cc ab 12 0b d8 92 0b 32 03 4e 85 9f e8 3f P........2.N...? 00:22:29.462 000001b0 55 3f 13 13 73 e4 bf 6c 86 d2 27 f2 df a7 2c c4 U?..s..l..'...,. 00:22:29.462 000001c0 18 5e 18 b1 39 e5 b8 58 99 84 0f f6 6b 7e ef 4f .^..9..X....k~.O 00:22:29.462 000001d0 2e 97 18 53 10 3a 0c d5 41 d9 5e 3d b2 26 4b e7 ...S.:..A.^=.&K. 00:22:29.462 000001e0 81 8f ec e2 d2 4f a7 03 32 d0 70 a9 d9 1d 10 c4 .....O..2.p..... 00:22:29.462 000001f0 bb a1 fd 59 f2 f8 7b 48 4f eb 51 9d 67 c2 72 f5 ...Y..{HO.Q.g.r. 00:22:29.462 00000200 23 a1 f4 21 fe 19 cc 62 db 98 b0 59 e6 97 55 1f #..!...b...Y..U. 00:22:29.462 00000210 f2 b2 dc 38 c1 44 a3 86 ce b2 99 4b dd 2f 73 c6 ...8.D.....K./s. 00:22:29.462 00000220 73 2f d8 be bf 62 a4 1d f9 ab cb ab 63 ca c1 a4 s/...b......c... 00:22:29.462 00000230 e1 fe 5b d3 b3 44 73 7d 00 5e b7 5c 95 03 1d 46 ..[..Ds}.^.\...F 00:22:29.462 00000240 71 b2 d5 14 35 1b bb 23 90 c8 7d b1 28 12 1e 23 q...5..#..}.(..# 00:22:29.462 00000250 11 72 b5 f7 87 66 40 f2 83 12 a1 7c 9f 56 f0 d4 .r...f@....|.V.. 00:22:29.462 00000260 19 fa 4b 48 51 00 13 1e b6 99 9c 63 c4 0f 49 49 ..KHQ......c..II 00:22:29.462 00000270 d6 a5 8e 00 1a 86 16 a3 8e a7 4b 51 d5 2f 27 3b ..........KQ./'; 00:22:29.462 00000280 34 7f da 96 a4 13 16 d9 10 7f aa 45 d2 c7 f4 da 4..........E.... 00:22:29.462 00000290 2c 20 91 0d 80 03 0d 6e 79 85 ae e4 e5 e2 51 47 , .....ny.....QG 00:22:29.462 000002a0 33 19 5e 3c 0d 60 cc 45 d4 8f df e8 81 81 5d 47 3.^<.`.E......]G 00:22:29.462 000002b0 e3 8c a1 cf d2 1f c6 57 f3 21 2d eb 4b 57 ce 64 .......W.!-.KW.d 00:22:29.462 000002c0 8c d1 c3 07 6e 97 7e 80 e6 9e 77 b0 d3 99 75 3e ....n.~...w...u> 00:22:29.462 000002d0 15 02 5e 1e 51 17 73 5b 9c f7 a7 f2 79 74 75 e9 ..^.Q.s[....ytu. 
00:22:29.462 000002e0 d4 e6 94 d5 c5 e9 da ce 17 05 5a a0 99 82 12 9c ..........Z..... 00:22:29.462 000002f0 be 23 f6 6b 23 f2 00 1c 60 f4 b1 d7 6e b2 bc 5b .#.k#...`...n..[ 00:22:29.462 [2024-09-27 13:27:07.629860] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key2, hash=1, dhgroup=4, seq=3775755208, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.462 [2024-09-27 13:27:07.630229] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.462 [2024-09-27 13:27:07.678259] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.462 [2024-09-27 13:27:07.678823] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.462 [2024-09-27 13:27:07.679251] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.462 [2024-09-27 13:27:07.679593] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.462 [2024-09-27 13:27:07.816357] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.462 [2024-09-27 13:27:07.816885] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.462 [2024-09-27 13:27:07.816994] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 4 (ffdhe6144) 00:22:29.462 [2024-09-27 13:27:07.817155] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.462 [2024-09-27 13:27:07.817440] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.462 ctrlr pubkey: 00:22:29.462 00000000 39 aa ae 11 c4 4c 10 86 31 f8 56 e3 da b4 5c f9 9....L..1.V...\. 00:22:29.462 00000010 d3 97 75 e2 b4 87 fc 16 93 a1 3b ad 17 e8 64 ef ..u.......;...d. 00:22:29.462 00000020 f3 39 42 ed 5a 1b 59 94 7a c8 9c 06 8b 4a f7 e3 .9B.Z.Y.z....J.. 00:22:29.462 00000030 1b f1 4d d5 81 60 49 f9 28 7d 85 8e 3f 1e 94 4d ..M..`I.(}..?..M 00:22:29.462 00000040 d1 15 89 34 b9 77 8c 06 49 01 f1 31 ef 78 af e5 ...4.w..I..1.x.. 00:22:29.462 00000050 12 23 64 8a 2b 43 23 09 56 89 df ab 5d 29 2a 36 .#d.+C#.V...])*6 00:22:29.462 00000060 ae ce 56 7b 5b d5 21 bd 1d d9 91 3d 52 99 3d 99 ..V{[.!....=R.=. 00:22:29.462 00000070 63 20 58 9d e3 97 1b 85 1e bf f8 07 93 b4 77 6c c X...........wl 00:22:29.462 00000080 da bb 95 50 0e fa 05 d6 75 2d c1 4c 89 3d fe b7 ...P....u-.L.=.. 00:22:29.462 00000090 5b ec 20 77 a1 94 6e 24 68 01 83 6d bd 97 ec ab [. w..n$h..m.... 00:22:29.462 000000a0 b0 62 39 8d 93 37 b4 f0 42 29 df 1e 7f 19 c6 45 .b9..7..B).....E 00:22:29.462 000000b0 62 75 c6 b6 94 9d 2f ce d9 40 42 68 75 63 04 21 bu..../..@Bhuc.! 00:22:29.462 000000c0 57 f8 4c ad dc 13 51 60 a9 a5 7a 0c 16 f3 a9 80 W.L...Q`..z..... 00:22:29.462 000000d0 e1 75 d4 4b f6 4b 63 38 b6 98 21 dc a9 fa e0 1f .u.K.Kc8..!..... 
00:22:29.462 000000e0 3e 5b d8 f6 f5 48 c5 b6 e5 7b 2b e0 27 7b 40 58 >[...H...{+.'{@X 00:22:29.462 000000f0 cc 98 ea 22 6f 0c 81 01 ed c1 e7 b9 4d 27 c5 86 ..."o.......M'.. 00:22:29.462 00000100 aa 57 68 58 5f 52 fa c9 8f 1c f7 ab 72 b9 dc e9 .WhX_R......r... 00:22:29.462 00000110 2d 90 95 81 20 bd 6b 52 45 bc 95 8b ad 5f 70 e7 -... .kRE...._p. 00:22:29.462 00000120 95 c6 68 c0 aa 02 1f 5c 80 d2 28 fd 9f fd 49 71 ..h....\..(...Iq 00:22:29.462 00000130 2a 87 ba a9 fc 30 19 a7 fd b0 de bf 25 6d 0c 0e *....0......%m.. 00:22:29.462 00000140 d1 1e 13 b3 8b 8c 50 00 03 83 25 e5 5d 4b d7 76 ......P...%.]K.v 00:22:29.462 00000150 2d ad 09 68 a7 d2 1e 09 de 80 54 35 19 94 0f 0c -..h......T5.... 00:22:29.462 00000160 33 36 cb d8 64 67 c1 a5 e4 b3 a8 58 95 9e 0a 9c 36..dg.....X.... 00:22:29.462 00000170 ab d1 fc e1 65 2e f7 45 9e 85 5f 6b 6f d7 0e 45 ....e..E.._ko..E 00:22:29.462 00000180 e1 6c ef d9 4c 13 02 21 50 4f d6 0a ea a7 f8 24 .l..L..!PO.....$ 00:22:29.462 00000190 ff 49 eb 5c 9b ad 75 2d 2a a8 02 88 23 3d 76 0b .I.\..u-*...#=v. 00:22:29.462 000001a0 58 d9 e1 09 e3 d5 b9 4c 7f 11 cf ae 5c 36 24 34 X......L....\6$4 00:22:29.462 000001b0 2a e9 12 e7 6d b2 2c 9c 28 12 64 a1 3a af c0 d4 *...m.,.(.d.:... 00:22:29.462 000001c0 c8 0d ec 78 51 aa be bc 2d 3d a5 cf 48 d3 60 bc ...xQ...-=..H.`. 00:22:29.462 000001d0 8f 5b e0 db 89 2b e8 bf 43 95 62 08 4a e7 af 77 .[...+..C.b.J..w 00:22:29.462 000001e0 a7 b5 b3 1e 9e 77 63 1f b4 18 20 ab bf 0a 1b f0 .....wc... ..... 00:22:29.462 000001f0 55 0c 02 a9 74 e9 3f 56 46 db 5e 47 89 41 28 a5 U...t.?VF.^G.A(. 00:22:29.462 00000200 fb b8 27 ba 81 fa 88 6f bd c9 ef a0 f6 40 d8 83 ..'....o.....@.. 00:22:29.462 00000210 ba 17 33 cf 7d 61 ca d9 05 cd 58 a2 47 1e 0c 46 ..3.}a....X.G..F 00:22:29.462 00000220 ae 58 9f ac e8 47 90 8d d9 a7 89 4d f1 7e ed ac .X...G.....M.~.. 00:22:29.462 00000230 fa 34 4b d5 43 c4 5d 96 6d 96 13 14 ab 9c 86 94 .4K.C.].m....... 00:22:29.463 00000240 72 29 79 8d e9 d9 c9 a3 4e 46 22 84 4b 40 42 74 r)y.....NF".K@Bt 00:22:29.463 00000250 b8 41 4b a8 4b a4 42 49 d5 c6 3d 25 83 50 31 63 .AK.K.BI..=%.P1c 00:22:29.463 00000260 b9 db 29 06 6a 6a ef fa 00 b6 2f 39 a1 1f 8f 7a ..).jj..../9...z 00:22:29.463 00000270 af 1a 1f ce 87 0c 61 5b eb ae 64 51 c0 5c 97 74 ......a[..dQ.\.t 00:22:29.463 00000280 2c fb 34 25 77 f7 36 41 33 76 37 e5 65 d4 fc 20 ,.4%w.6A3v7.e.. 00:22:29.463 00000290 ef f3 42 42 36 a1 ac 00 f0 3b 7d c0 aa d8 84 ae ..BB6....;}..... 00:22:29.463 000002a0 3e 91 a1 f3 9f c8 00 e9 fa 3f d8 f8 8d c7 08 cf >........?...... 00:22:29.463 000002b0 24 d7 6e 58 1f 18 38 23 f5 8b c8 4b 71 b2 eb 2b $.nX..8#...Kq..+ 00:22:29.463 000002c0 7e 80 f9 1c 62 39 f0 6d 92 53 1d e0 71 ce 39 11 ~...b9.m.S..q.9. 00:22:29.463 000002d0 a8 26 19 4a ba 6c dc a9 3c 37 6d e4 c0 df 96 f2 .&.J.l..<7m..... 00:22:29.463 000002e0 d4 74 86 fb 59 05 11 76 77 f6 55 f2 05 a7 34 df .t..Y..vw.U...4. 00:22:29.463 000002f0 a0 32 e2 94 1e 57 39 fa af b4 c4 9c b1 41 05 70 .2...W9......A.p 00:22:29.463 host pubkey: 00:22:29.463 00000000 ab 8f fd e9 f4 2a 5e 8b df c2 cd 24 90 cc 52 5b .....*^....$..R[ 00:22:29.463 00000010 f1 cb 17 02 81 d8 2b ff ae 90 4c 11 8f 21 73 78 ......+...L..!sx 00:22:29.463 00000020 cc ea 26 d5 27 1e 62 59 5d 67 2a 79 0c 2c 43 ba ..&.'.bY]g*y.,C. 00:22:29.463 00000030 bb 03 97 1d af b6 8b de 23 58 3e 75 6f 90 ac a8 ........#X>uo... 00:22:29.463 00000040 b3 2d a2 48 ab d3 b6 2d 88 10 0b ae 57 cb df 70 .-.H...-....W..p 00:22:29.463 00000050 62 27 3e 65 ff f3 b9 04 93 bc 96 91 bb eb 12 db b'>e............ 
00:22:29.463 00000060 5a 64 f4 f9 44 c5 ee e2 bf dd 10 e0 c1 7a f7 75 Zd..D........z.u 00:22:29.463 00000070 88 49 41 51 28 d7 fa 3a 69 b2 d7 ab cd fe a9 a7 .IAQ(..:i....... 00:22:29.463 00000080 7a 6f e8 87 d7 1f 2b 08 b4 b5 ae ca 57 f2 96 50 zo....+.....W..P 00:22:29.463 00000090 7c 8d ea 47 8c 91 6a a5 45 07 6a 56 9a 3b aa 7a |..G..j.E.jV.;.z 00:22:29.463 000000a0 ac 24 93 bc 36 23 17 01 8f 92 13 93 82 b4 57 b1 .$..6#........W. 00:22:29.463 000000b0 2d 36 0d 8a e4 58 a4 75 15 96 ef 37 99 2b 8f 69 -6...X.u...7.+.i 00:22:29.463 000000c0 5c 73 c4 38 cc a7 d7 96 85 6a 7b 02 c1 6f 7c c4 \s.8.....j{..o|. 00:22:29.463 000000d0 84 45 67 b1 3c 40 f8 8f 0e 17 54 15 44 65 7e 03 .Eg.<@....T.De~. 00:22:29.463 000000e0 22 28 79 51 8e 95 0d 1c fd 86 69 25 5f 18 9b b7 "(yQ......i%_... 00:22:29.463 000000f0 05 35 53 76 cf 40 94 12 ec bd 66 a1 26 e4 cc d9 .5Sv.@....f.&... 00:22:29.463 00000100 1e 11 f0 4b 05 30 16 66 87 b1 e6 06 f6 a1 25 03 ...K.0.f......%. 00:22:29.463 00000110 bc a7 ba 7d 60 8e 76 68 82 30 6a 6d 81 c8 68 86 ...}`.vh.0jm..h. 00:22:29.463 00000120 c8 ef fa 8f 43 6a 4f d0 d3 79 ce ef 31 86 5c 11 ....CjO..y..1.\. 00:22:29.463 00000130 95 41 5b 24 d8 b1 58 91 92 16 ee 9e e2 ff 5a d9 .A[$..X.......Z. 00:22:29.463 00000140 e0 d8 36 89 3c 57 43 2c 4f 4e 83 19 fe 47 ac d0 ..6.&w.O..|...S] 00:22:29.463 00000270 d2 94 3a fd c9 65 76 87 1f 5b f7 8d 2a ea 35 5f ..:..ev..[..*.5_ 00:22:29.463 00000280 35 75 f8 81 24 18 19 52 59 5d 32 47 49 af b0 39 5u..$..RY]2GI..9 00:22:29.463 00000290 36 b6 a3 14 e1 1c c9 51 0f 9e 1d 91 81 b2 b4 fe 6......Q........ 00:22:29.463 000002a0 a0 32 dd 21 90 b5 d8 c0 0b 13 be a6 79 96 67 9f .2.!........y.g. 00:22:29.463 000002b0 36 5b e3 ee ed f9 0b d2 05 fd cd c9 93 6e 15 48 6[...........n.H 00:22:29.463 000002c0 ca 9f 67 bc ce 73 94 22 a4 ea ca dc 0f 54 1f 81 ..g..s.".....T.. 00:22:29.463 000002d0 d6 cf 95 10 c3 ec cc 69 e7 7d c0 98 dc e4 4c eb .......i.}....L. 00:22:29.463 000002e0 fc 09 3a c6 01 c5 51 67 03 b8 ce 14 46 32 e4 4d ..:...Qg....F2.M 00:22:29.463 000002f0 eb 47 5d be 24 4c 81 d5 0d c5 4f 84 e5 11 56 6b .G].$L....O...Vk 00:22:29.463 dh secret: 00:22:29.463 00000000 5b e8 08 3f 17 b8 7f 7d 61 eb ae 65 e6 ed e7 d0 [..?...}a..e.... 00:22:29.463 00000010 3d 02 38 04 aa 24 df 7b 3b 76 0d 20 f0 e2 ff 16 =.8..$.{;v. .... 00:22:29.463 00000020 e9 45 f0 6b 78 71 d4 85 7d 24 cc 20 f0 8e ff 3a .E.kxq..}$. ...: 00:22:29.463 00000030 46 c9 f2 49 b6 9b be b7 76 51 6a ab d5 a1 5a 1d F..I....vQj...Z. 00:22:29.463 00000040 92 a2 1d 88 c6 37 dc 61 e3 af fc 09 cc 92 ad 2f .....7.a......./ 00:22:29.463 00000050 66 3f 4a 3a 82 66 9d 18 96 ca 47 18 16 2c 1d da f?J:.f....G..,.. 00:22:29.463 00000060 80 6e 35 80 9e 93 ef f9 9c e3 6b 49 46 86 a0 35 .n5.......kIF..5 00:22:29.463 00000070 68 0d e9 82 a5 91 61 03 f2 4f e1 2f b2 cb 0a 82 h.....a..O./.... 00:22:29.463 00000080 65 0f 66 d7 dd 48 31 f4 32 78 77 45 19 ee b6 99 e.f..H1.2xwE.... 00:22:29.463 00000090 1f f3 54 ad 61 84 05 96 fa 1c ee 52 37 20 f6 61 ..T.a......R7 .a 00:22:29.463 000000a0 85 4b d6 e9 ed 9c 5c e4 a8 b7 fa fc c7 94 b9 5d .K....\........] 00:22:29.463 000000b0 75 6b d2 9b 50 03 4a b2 c9 ee 44 25 72 35 87 d7 uk..P.J...D%r5.. 00:22:29.463 000000c0 7f a4 f2 ac e7 c9 44 54 48 35 3f ea 6a 7c 0f e6 ......DTH5?.j|.. 00:22:29.463 000000d0 c1 ba a4 5f 62 12 f8 91 d9 7b 2f 21 6f 7f 01 52 ..._b....{/!o..R 00:22:29.463 000000e0 44 61 0a 77 23 cd 58 ab 67 b1 8c a0 d3 f4 3b 4f Da.w#.X.g.....;O 00:22:29.463 000000f0 4b a9 15 73 e8 1c 5c 36 30 ff 95 56 95 51 84 1e K..s..\60..V.Q.. 
00:22:29.463 00000100 fb 4b 33 94 44 f4 58 a4 90 14 54 5e 42 be d4 ca .K3.D.X...T^B... 00:22:29.463 00000110 ba 07 34 1b 64 ae 9a 4f 38 56 27 6d 13 5a c2 78 ..4.d..O8V'm.Z.x 00:22:29.463 00000120 f1 38 82 97 84 da c5 41 71 3e 0c 4f 5b e7 ac a6 .8.....Aq>.O[... 00:22:29.463 00000130 e8 0e a6 10 17 39 47 8c b4 62 f3 b7 b3 cf f4 5f .....9G..b....._ 00:22:29.463 00000140 a9 66 68 15 00 b8 88 24 2c f8 ba a2 0a c4 35 4a .fh....$,.....5J 00:22:29.463 00000150 13 52 54 37 31 06 27 70 49 f3 96 6b b8 13 f7 1a .RT71.'pI..k.... 00:22:29.463 00000160 fb a3 fb 8a ef d0 f2 b5 d0 41 27 bf 3c a2 62 c5 .........A'.<.b. 00:22:29.463 00000170 d6 11 f3 3f 43 7c 63 08 b5 99 4e e2 5f e8 9d d6 ...?C|c...N._... 00:22:29.463 00000180 bc 0d a6 b1 e5 54 75 8e 68 1b 19 d0 b0 6b 27 54 .....Tu.h....k'T 00:22:29.463 00000190 ef c0 6b 9e 4b cc 73 dd 6a f2 36 5c 20 72 ac c6 ..k.K.s.j.6\ r.. 00:22:29.463 000001a0 7f fe da 01 df 0c 82 5a 3f 67 da b7 f4 f3 2b d7 .......Z?g....+. 00:22:29.463 000001b0 c2 e1 3b 52 92 8e 1b ab 52 df fa d0 04 4f a8 c6 ..;R....R....O.. 00:22:29.463 000001c0 2a 1d d3 37 21 f1 64 36 8e 37 b3 9c 8b 9b b6 bb *..7!.d6.7...... 00:22:29.463 000001d0 ca bb d1 60 0d df f9 8e 1c 60 b8 6b b0 87 c1 ca ...`.....`.k.... 00:22:29.463 000001e0 c3 bd 3f 3b 56 7e df 5e 4c ae ce 76 31 ac fd 7d ..?;V~.^L..v1..} 00:22:29.463 000001f0 91 b2 9f 4f fb f5 5f 36 28 30 b0 5f 65 63 a9 a1 ...O.._6(0._ec.. 00:22:29.463 00000200 c2 70 df 1a 4a b8 a5 f2 95 7e 22 ec 26 cf d0 ad .p..J....~".&... 00:22:29.463 00000210 f3 31 92 41 6d 6c 10 15 b7 c9 43 e9 40 f8 57 a8 .1.Aml....C.@.W. 00:22:29.463 00000220 01 9b 77 15 4f cc 7f 0f 5f 4e 03 39 b7 9c 7b 99 ..w.O..._N.9..{. 00:22:29.463 00000230 23 e0 23 93 85 38 86 3a 47 d6 4e 90 12 8e ae d9 #.#..8.:G.N..... 00:22:29.463 00000240 c8 77 d1 f1 1c 4c b8 ec c4 ca b2 13 db 50 bf 9f .w...L.......P.. 00:22:29.463 00000250 9c 0a c6 90 a6 64 6d 7a fe 40 6c 13 47 c0 33 54 .....dmz.@l.G.3T 00:22:29.463 00000260 54 85 cb df 76 9f 6e 71 34 57 3f 40 a3 1a b9 c9 T...v.nq4W?@.... 00:22:29.463 00000270 79 76 0b 0b 41 0a fb f7 eb d2 51 98 6b 6b b0 92 yv..A.....Q.kk.. 00:22:29.463 00000280 fd 58 dc 45 a6 4a b8 9e e8 34 60 ee ee aa 88 f5 .X.E.J...4`..... 00:22:29.463 00000290 da 53 63 f8 48 d2 ae 6c e3 d5 da d2 bb 34 a4 6c .Sc.H..l.....4.l 00:22:29.463 000002a0 fe 48 52 68 b0 d4 a5 d1 d7 bd 91 84 c5 05 a4 d5 .HRh............ 00:22:29.463 000002b0 71 0a 06 aa a0 2b a7 5f ff 66 0a 32 ec a2 be 0d q....+._.f.2.... 00:22:29.463 000002c0 0e cd a8 55 51 35 08 88 ac c6 0e be d6 bb 38 7b ...UQ5........8{ 00:22:29.463 000002d0 a6 4e d3 b5 36 3d 1e b2 db 16 5a 46 31 cc 1e b4 .N..6=....ZF1... 00:22:29.463 000002e0 90 76 70 40 a1 a0 93 06 3a f9 2f 9f 55 2a 62 51 .vp@....:./.U*bQ 00:22:29.463 000002f0 0f a5 c1 d3 08 2e 92 e9 67 14 83 a3 65 33 4e c7 ........g...e3N. 
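The "ctrlr pubkey", "host pubkey" and "dh secret" blocks dumped above are the two public values and the shared secret of a finite-field Diffie-Hellman exchange over the negotiated dhgroup 4 (ffdhe6144, RFC 7919); the DH-HMAC-CHAP exchange folds that shared secret into the challenge/response whose completion is logged as "authentication completed successfully". The sketch below only illustrates the agreement itself with a toy modulus (the Mersenne prime 2**127 - 1); it is not the real 6144-bit group and not SPDK's nvme_auth code.

    import secrets

    # Toy stand-ins for the negotiated group; the trace above uses ffdhe6144
    # (6144-bit modulus, generator 2). This small prime is for illustration
    # only and is NOT secure.
    p = 2**127 - 1
    g = 5

    host_priv = secrets.randbelow(p - 2) + 1    # host's ephemeral private value
    ctrlr_priv = secrets.randbelow(p - 2) + 1   # controller's ephemeral private value

    ctrlr_pub = pow(g, ctrlr_priv, p)   # carried in the challenge -> "ctrlr pubkey" dump
    host_pub = pow(g, host_priv, p)     # carried in the reply     -> "host pubkey" dump

    # Both ends derive the same value -> "dh secret" dump
    assert pow(ctrlr_pub, host_priv, p) == pow(host_pub, ctrlr_priv, p)
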
00:22:29.463 [2024-09-27 13:27:07.892064] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key3, hash=1, dhgroup=4, seq=3775755209, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.463 [2024-09-27 13:27:07.892383] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.463 [2024-09-27 13:27:07.944874] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.463 [2024-09-27 13:27:07.945488] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.463 [2024-09-27 13:27:07.945772] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.463 [2024-09-27 13:27:07.945939] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.463 [2024-09-27 13:27:07.997553] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.463 [2024-09-27 13:27:07.997840] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 1 (sha256) 00:22:29.463 [2024-09-27 13:27:07.998059] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 4 (ffdhe6144) 00:22:29.463 [2024-09-27 13:27:07.998182] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.463 [2024-09-27 13:27:07.998390] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.463 ctrlr pubkey: 00:22:29.463 00000000 39 aa ae 11 c4 4c 10 86 31 f8 56 e3 da b4 5c f9 9....L..1.V...\. 00:22:29.463 00000010 d3 97 75 e2 b4 87 fc 16 93 a1 3b ad 17 e8 64 ef ..u.......;...d. 00:22:29.463 00000020 f3 39 42 ed 5a 1b 59 94 7a c8 9c 06 8b 4a f7 e3 .9B.Z.Y.z....J.. 00:22:29.463 00000030 1b f1 4d d5 81 60 49 f9 28 7d 85 8e 3f 1e 94 4d ..M..`I.(}..?..M 00:22:29.463 00000040 d1 15 89 34 b9 77 8c 06 49 01 f1 31 ef 78 af e5 ...4.w..I..1.x.. 00:22:29.463 00000050 12 23 64 8a 2b 43 23 09 56 89 df ab 5d 29 2a 36 .#d.+C#.V...])*6 00:22:29.463 00000060 ae ce 56 7b 5b d5 21 bd 1d d9 91 3d 52 99 3d 99 ..V{[.!....=R.=. 00:22:29.463 00000070 63 20 58 9d e3 97 1b 85 1e bf f8 07 93 b4 77 6c c X...........wl 00:22:29.463 00000080 da bb 95 50 0e fa 05 d6 75 2d c1 4c 89 3d fe b7 ...P....u-.L.=.. 00:22:29.463 00000090 5b ec 20 77 a1 94 6e 24 68 01 83 6d bd 97 ec ab [. w..n$h..m.... 00:22:29.464 000000a0 b0 62 39 8d 93 37 b4 f0 42 29 df 1e 7f 19 c6 45 .b9..7..B).....E 00:22:29.464 000000b0 62 75 c6 b6 94 9d 2f ce d9 40 42 68 75 63 04 21 bu..../..@Bhuc.! 00:22:29.464 000000c0 57 f8 4c ad dc 13 51 60 a9 a5 7a 0c 16 f3 a9 80 W.L...Q`..z..... 00:22:29.464 000000d0 e1 75 d4 4b f6 4b 63 38 b6 98 21 dc a9 fa e0 1f .u.K.Kc8..!..... 00:22:29.464 000000e0 3e 5b d8 f6 f5 48 c5 b6 e5 7b 2b e0 27 7b 40 58 >[...H...{+.'{@X 00:22:29.464 000000f0 cc 98 ea 22 6f 0c 81 01 ed c1 e7 b9 4d 27 c5 86 ..."o.......M'.. 00:22:29.464 00000100 aa 57 68 58 5f 52 fa c9 8f 1c f7 ab 72 b9 dc e9 .WhX_R......r... 
00:22:29.464 00000110 2d 90 95 81 20 bd 6b 52 45 bc 95 8b ad 5f 70 e7 -... .kRE...._p. 00:22:29.464 00000120 95 c6 68 c0 aa 02 1f 5c 80 d2 28 fd 9f fd 49 71 ..h....\..(...Iq 00:22:29.464 00000130 2a 87 ba a9 fc 30 19 a7 fd b0 de bf 25 6d 0c 0e *....0......%m.. 00:22:29.464 00000140 d1 1e 13 b3 8b 8c 50 00 03 83 25 e5 5d 4b d7 76 ......P...%.]K.v 00:22:29.464 00000150 2d ad 09 68 a7 d2 1e 09 de 80 54 35 19 94 0f 0c -..h......T5.... 00:22:29.464 00000160 33 36 cb d8 64 67 c1 a5 e4 b3 a8 58 95 9e 0a 9c 36..dg.....X.... 00:22:29.464 00000170 ab d1 fc e1 65 2e f7 45 9e 85 5f 6b 6f d7 0e 45 ....e..E.._ko..E 00:22:29.464 00000180 e1 6c ef d9 4c 13 02 21 50 4f d6 0a ea a7 f8 24 .l..L..!PO.....$ 00:22:29.464 00000190 ff 49 eb 5c 9b ad 75 2d 2a a8 02 88 23 3d 76 0b .I.\..u-*...#=v. 00:22:29.464 000001a0 58 d9 e1 09 e3 d5 b9 4c 7f 11 cf ae 5c 36 24 34 X......L....\6$4 00:22:29.464 000001b0 2a e9 12 e7 6d b2 2c 9c 28 12 64 a1 3a af c0 d4 *...m.,.(.d.:... 00:22:29.464 000001c0 c8 0d ec 78 51 aa be bc 2d 3d a5 cf 48 d3 60 bc ...xQ...-=..H.`. 00:22:29.464 000001d0 8f 5b e0 db 89 2b e8 bf 43 95 62 08 4a e7 af 77 .[...+..C.b.J..w 00:22:29.464 000001e0 a7 b5 b3 1e 9e 77 63 1f b4 18 20 ab bf 0a 1b f0 .....wc... ..... 00:22:29.464 000001f0 55 0c 02 a9 74 e9 3f 56 46 db 5e 47 89 41 28 a5 U...t.?VF.^G.A(. 00:22:29.464 00000200 fb b8 27 ba 81 fa 88 6f bd c9 ef a0 f6 40 d8 83 ..'....o.....@.. 00:22:29.464 00000210 ba 17 33 cf 7d 61 ca d9 05 cd 58 a2 47 1e 0c 46 ..3.}a....X.G..F 00:22:29.464 00000220 ae 58 9f ac e8 47 90 8d d9 a7 89 4d f1 7e ed ac .X...G.....M.~.. 00:22:29.464 00000230 fa 34 4b d5 43 c4 5d 96 6d 96 13 14 ab 9c 86 94 .4K.C.].m....... 00:22:29.464 00000240 72 29 79 8d e9 d9 c9 a3 4e 46 22 84 4b 40 42 74 r)y.....NF".K@Bt 00:22:29.464 00000250 b8 41 4b a8 4b a4 42 49 d5 c6 3d 25 83 50 31 63 .AK.K.BI..=%.P1c 00:22:29.464 00000260 b9 db 29 06 6a 6a ef fa 00 b6 2f 39 a1 1f 8f 7a ..).jj..../9...z 00:22:29.464 00000270 af 1a 1f ce 87 0c 61 5b eb ae 64 51 c0 5c 97 74 ......a[..dQ.\.t 00:22:29.464 00000280 2c fb 34 25 77 f7 36 41 33 76 37 e5 65 d4 fc 20 ,.4%w.6A3v7.e.. 00:22:29.464 00000290 ef f3 42 42 36 a1 ac 00 f0 3b 7d c0 aa d8 84 ae ..BB6....;}..... 00:22:29.464 000002a0 3e 91 a1 f3 9f c8 00 e9 fa 3f d8 f8 8d c7 08 cf >........?...... 00:22:29.464 000002b0 24 d7 6e 58 1f 18 38 23 f5 8b c8 4b 71 b2 eb 2b $.nX..8#...Kq..+ 00:22:29.464 000002c0 7e 80 f9 1c 62 39 f0 6d 92 53 1d e0 71 ce 39 11 ~...b9.m.S..q.9. 00:22:29.464 000002d0 a8 26 19 4a ba 6c dc a9 3c 37 6d e4 c0 df 96 f2 .&.J.l..<7m..... 00:22:29.464 000002e0 d4 74 86 fb 59 05 11 76 77 f6 55 f2 05 a7 34 df .t..Y..vw.U...4. 00:22:29.464 000002f0 a0 32 e2 94 1e 57 39 fa af b4 c4 9c b1 41 05 70 .2...W9......A.p 00:22:29.464 host pubkey: 00:22:29.464 00000000 bd 51 ac b2 1f cd 6e ae d3 ac 11 76 48 aa d6 8a .Q....n....vH... 00:22:29.464 00000010 92 b1 f4 0e 1f cf a6 ce d5 97 76 33 8c f1 05 e7 ..........v3.... 00:22:29.464 00000020 5b 0a 05 68 a9 df 36 91 00 5a b5 4e 9b 24 5a 06 [..h..6..Z.N.$Z. 00:22:29.464 00000030 3b 79 c5 3c ad 48 d7 0d b9 58 99 1d 66 a3 26 b1 ;y.<.H...X..f.&. 00:22:29.464 00000040 9d ac 74 bc c6 c6 0a b4 27 5a 2e 77 e9 23 c0 2a ..t.....'Z.w.#.* 00:22:29.464 00000050 45 c7 dc ce a9 e8 d8 8d ee b6 33 91 a8 17 d3 11 E.........3..... 00:22:29.464 00000060 03 da 71 bc 52 9e c8 38 71 4c 74 b4 1f ae cd c4 ..q.R..8qLt..... 00:22:29.464 00000070 f8 5e fb ec 63 92 c7 ea e1 7c 4a d8 05 ff 89 2f .^..c....|J..../ 00:22:29.464 00000080 61 d4 e9 05 90 1a 72 61 b5 cc 11 99 16 82 b2 b3 a.....ra........ 
00:22:29.464 00000090 05 6a d5 37 ec 0c 14 a3 d5 29 d8 ce f0 22 fb c5 .j.7.....)...".. 00:22:29.464 000000a0 d6 10 09 23 8a 0f 8c 18 fb 6d 26 a6 b2 0a a2 6a ...#.....m&....j 00:22:29.464 000000b0 75 11 fe 17 8b 3e ca b2 fd c8 bf 43 86 0c 9e 6f u....>.....C...o 00:22:29.464 000000c0 70 20 65 d8 0b f5 0a 0c d0 5d d1 aa 26 d8 4b 8a p e......]..&.K. 00:22:29.464 000000d0 e6 72 9b 82 61 8b e6 62 81 cd e0 93 28 91 c5 4d .r..a..b....(..M 00:22:29.464 000000e0 fa 6e 3e 74 47 44 74 18 c7 70 ad 54 95 27 8f 47 .n>tGDt..p.T.'.G 00:22:29.464 000000f0 44 06 39 74 7e 0e f2 31 48 19 b2 86 07 e8 31 c5 D.9t~..1H.....1. 00:22:29.464 00000100 4c 57 d4 20 28 f2 3b 3f 64 66 08 fe 12 dd ba df LW. (.;?df...... 00:22:29.464 00000110 4e 2a a1 d7 59 9b e2 b7 df f5 56 38 ab eb ae e3 N*..Y.....V8.... 00:22:29.464 00000120 9f 45 51 f9 25 4e 3a 78 48 d3 09 8d 1f f9 b1 f9 .EQ.%N:xH....... 00:22:29.464 00000130 5b f5 c5 d8 94 ad 2b d5 0e 28 70 46 70 48 c2 10 [.....+..(pFpH.. 00:22:29.464 00000140 e3 01 15 32 69 96 2c e3 89 3d 6f e5 5e 81 cb 14 ...2i.,..=o.^... 00:22:29.464 00000150 ee 31 57 1f 9d 59 84 26 17 82 15 b9 df 1c dc 68 .1W..Y.&.......h 00:22:29.464 00000160 65 c6 40 23 21 b9 06 d1 1d 99 1c f3 96 4b 29 a3 e.@#!........K). 00:22:29.464 00000170 27 bd e4 29 f2 87 25 a3 8d 97 4a 16 8f de fd be '..)..%...J..... 00:22:29.464 00000180 af a4 c3 b7 e4 17 a7 d0 80 e9 2c 12 49 a4 04 d9 ..........,.I... 00:22:29.464 00000190 53 67 ae 05 00 6e 38 fd ae 66 8c 4a b1 bc 36 98 Sg...n8..f.J..6. 00:22:29.464 000001a0 3e 6e 57 04 65 3d 50 70 6b 48 94 01 dd 06 13 d4 >nW.e=PpkH...... 00:22:29.464 000001b0 46 8d 9d 1c c9 c5 55 d2 00 87 42 f4 11 d7 e7 4c F.....U...B....L 00:22:29.464 000001c0 15 12 73 17 fb a9 45 ac 5e 70 f0 5d 69 58 5d 72 ..s...E.^p.]iX]r 00:22:29.464 000001d0 58 af 69 4d ee 65 3f 9d 51 9a c5 ed 8a 3a bc 7d X.iM.e?.Q....:.} 00:22:29.464 000001e0 4a f8 ff 5a d0 99 80 dc 2e 85 b5 f0 df e1 55 00 J..Z..........U. 00:22:29.464 000001f0 3f 96 e2 24 bf 98 40 e5 b1 37 7d c5 f2 45 ea f2 ?..$..@..7}..E.. 00:22:29.464 00000200 89 0e a2 bd 98 ed 8d 9c 4b 6c f8 a2 86 df b7 d5 ........Kl...... 00:22:29.464 00000210 74 94 87 47 28 8b 94 30 d8 08 77 56 f5 fa 32 20 t..G(..0..wV..2 00:22:29.464 00000220 f2 1d de 72 2d 65 a2 2f 2a 11 3b ea 4a 7a f0 c8 ...r-e./*.;.Jz.. 00:22:29.464 00000230 c3 9f 3b ec f9 ca 45 f0 1c 09 09 dc 0a 41 be 5f ..;...E......A._ 00:22:29.464 00000240 b0 d8 c4 39 d7 aa 9c 24 74 0a 2d c0 92 9c eb 78 ...9...$t.-....x 00:22:29.464 00000250 9a 00 0e 53 68 e4 69 1f b9 15 f5 2c dc 3f 6a bb ...Sh.i....,.?j. 00:22:29.464 00000260 9d e9 39 90 59 4d 05 f5 79 a3 65 4e 25 46 09 bf ..9.YM..y.eN%F.. 00:22:29.464 00000270 68 87 4a 48 35 77 ba 10 fc fd 7e 6d cb 6c 06 b3 h.JH5w....~m.l.. 00:22:29.464 00000280 ab 16 70 d2 6a a9 0d 88 1b 18 c5 c7 b5 3c 92 90 ..p.j........<.. 00:22:29.464 00000290 6e b4 02 fe 87 19 f4 af a4 e9 b7 d9 bf 04 b2 91 n............... 00:22:29.464 000002a0 fa 71 41 ce b4 48 47 dc 11 bc d4 c1 94 ff 80 30 .qA..HG........0 00:22:29.464 000002b0 45 f2 de 54 3a 3b d5 29 ec 60 e2 b0 8b 14 0e 4a E..T:;.).`.....J 00:22:29.464 000002c0 8d 10 0c e7 85 8a 57 50 fb e1 45 a7 2a 3a 93 d2 ......WP..E.*:.. 00:22:29.464 000002d0 12 b4 48 a4 66 a8 fd f7 9a 31 c4 48 ca 09 0a c7 ..H.f....1.H.... 00:22:29.464 000002e0 46 07 ca f5 d8 48 3d 46 a3 ac 8b b7 95 d5 44 ce F....H=F......D. 00:22:29.464 000002f0 c6 63 64 eb 00 07 1a ba 65 9d 72 11 e2 d1 48 3e .cd.....e.r...H> 00:22:29.464 dh secret: 00:22:29.464 00000000 09 9d 35 ac 2b fe e2 fe af d1 9f e1 83 93 8e 03 ..5.+........... 
00:22:29.464 00000010 5d c7 67 ae fc 61 a8 c8 94 12 ed 3f a0 62 6a 23 ].g..a.....?.bj# 00:22:29.464 00000020 e1 84 bc ef a1 bd c8 f9 a2 a2 3a 34 a3 3a da fa ..........:4.:.. 00:22:29.464 00000030 6f a0 df e3 a9 c6 45 02 a7 f2 76 a9 d7 4e 84 d6 o.....E...v..N.. 00:22:29.464 00000040 e3 16 79 0f 38 33 98 f9 40 84 e6 a2 12 64 8e 6a ..y.83..@....d.j 00:22:29.464 00000050 b4 c7 dd f3 f2 36 0b 9b 0b 84 27 69 4b 51 99 a3 .....6....'iKQ.. 00:22:29.464 00000060 67 7d b8 93 fb 09 a8 fe 2f b5 3c 9d ea a8 b8 65 g}....../.<....e 00:22:29.464 00000070 a8 90 e0 41 a9 ed 58 5d 7d 38 e4 b8 d5 92 fc c9 ...A..X]}8...... 00:22:29.464 00000080 5c 8b c0 d8 81 69 8a d9 fa 4c 84 cf 6d a6 81 7a \....i...L..m..z 00:22:29.464 00000090 0d 84 2f e7 e6 3f 3a aa 6f 45 5a 5a 85 e2 d7 86 ../..?:.oEZZ.... 00:22:29.464 000000a0 1a 18 c3 fc 49 45 fd d2 e8 95 10 9d ed eb dd 46 ....IE.........F 00:22:29.464 000000b0 6e a9 d3 19 37 6b eb 49 75 45 ab 02 0f 22 a4 79 n...7k.IuE...".y 00:22:29.464 000000c0 e3 24 cf 91 8a 0e 9b e6 35 fa 98 32 60 3f 06 fd .$......5..2`?.. 00:22:29.464 000000d0 10 c2 cc cd 28 02 50 f1 c7 96 4e 25 ef 30 69 0a ....(.P...N%.0i. 00:22:29.464 000000e0 0c fd 97 df 0b af 18 08 18 36 f6 0d 59 0b b7 25 .........6..Y..% 00:22:29.464 000000f0 53 a0 0a 40 ec 7e 60 79 3f ef e9 30 8d 23 dc b5 S..@.~`y?..0.#.. 00:22:29.464 00000100 1a 5a 46 a8 01 61 32 ad 84 e4 8a 7d 81 66 3b 96 .ZF..a2....}.f;. 00:22:29.464 00000110 ec 00 f5 26 b0 38 ec 37 c1 b2 71 1e 87 77 c3 c3 ...&.8.7..q..w.. 00:22:29.464 00000120 7f 29 10 f6 2d 12 8f a4 f0 bf 9c 2d bb 7b 4d 84 .)..-......-.{M. 00:22:29.464 00000130 56 68 c1 e1 e1 04 61 b9 4b 6d 59 cb a2 ae 05 e6 Vh....a.KmY..... 00:22:29.464 00000140 aa 98 55 b1 66 92 f5 1f ac 15 82 08 2e f9 a9 66 ..U.f..........f 00:22:29.464 00000150 9d fc f3 6f ed 53 e6 24 93 7b 1c 2e e8 3e 94 83 ...o.S.$.{...>.. 00:22:29.464 00000160 20 aa e2 dd 27 78 36 8d 2d 6e 2b 67 7d 57 d6 7c ...'x6.-n+g}W.| 00:22:29.464 00000170 32 58 60 a0 f7 78 40 70 91 bf 66 8b 99 01 6d ff 2X`..x@p..f...m. 00:22:29.464 00000180 70 ca c1 5c a3 71 3d 49 e1 82 c2 e0 5f a4 6b ad p..\.q=I...._.k. 00:22:29.464 00000190 7d 76 81 93 37 f4 8d 54 7c 08 56 b1 d0 b9 3a 58 }v..7..T|.V...:X 00:22:29.464 000001a0 d1 17 c8 04 7f 07 7b 3b 81 23 ce 76 6e 05 e2 53 ......{;.#.vn..S 00:22:29.464 000001b0 b5 6c 56 2c 88 36 0f dd 7e 5f 39 39 04 d3 9f 60 .lV,.6..~_99...` 00:22:29.464 000001c0 85 a7 c9 f2 5e 32 63 80 41 fc ef 2e 49 ba 6f 50 ....^2c.A...I.oP 00:22:29.464 000001d0 17 b4 fc 78 66 61 97 bd 6a e9 c5 4e 52 d7 8b 57 ...xfa..j..NR..W 00:22:29.464 000001e0 ec 60 14 c0 b7 bd 8c 06 5d 17 8a 34 c8 b2 ec bc .`......]..4.... 00:22:29.464 000001f0 f8 be 16 86 19 0b bf bd 62 8f 0f 76 97 21 41 0f ........b..v.!A. 00:22:29.464 00000200 83 a1 68 3c 3d 68 02 be 32 57 90 82 75 30 81 ff ..h<=h..2W..u0.. 00:22:29.464 00000210 f0 38 82 da 0b a9 15 78 81 1a 8c 82 ae 14 04 3c .8.....x.......< 00:22:29.464 00000220 39 8c 1c 12 1d cb a4 74 42 4b f8 81 10 70 81 39 9......tBK...p.9 00:22:29.464 00000230 b7 0b 48 bd db 81 d0 b8 de 0e a6 eb da 59 ce 57 ..H..........Y.W 00:22:29.464 00000240 d1 49 9a f6 47 70 5d f4 db eb f8 5a b3 0c e8 80 .I..Gp]....Z.... 00:22:29.464 00000250 4e ea 64 d9 80 f8 00 91 ac fe 3a 9d 0b 72 bd c8 N.d.......:..r.. 00:22:29.464 00000260 b4 eb 3e 5d 23 63 38 f8 44 28 c9 68 07 5b 47 8b ..>]#c8.D(.h.[G. 00:22:29.464 00000270 17 6f 1e 40 b2 30 72 24 37 c9 b3 ca 4f d4 5d 03 .o.@.0r$7...O.]. 
00:22:29.464 00000280 47 04 dd 23 79 5c eb 13 02 12 36 d3 08 8e ce 45 G..#y\....6....E 00:22:29.464 00000290 a4 72 04 3a 56 a2 ce 69 42 08 97 7d 64 da 2c 76 .r.:V..iB..}d.,v 00:22:29.464 000002a0 3d 6c ae 52 a1 b4 d2 44 62 93 4e 98 2f fc e2 02 =l.R...Db.N./... 00:22:29.464 000002b0 ca eb 17 55 14 63 b5 c4 16 28 cc e7 01 53 6b 36 ...U.c...(...Sk6 00:22:29.464 000002c0 a5 8c 4d a5 dd d8 64 1a bd fc c0 37 40 63 8b 4b ..M...d....7@c.K 00:22:29.464 000002d0 d8 2b 89 11 f6 63 66 6a 5e d1 6c 2b 2c f7 39 61 .+...cfj^.l+,.9a 00:22:29.464 000002e0 de b5 b7 0c 7b 02 be 11 fe f6 13 25 82 a0 d1 ea ....{......%.... 00:22:29.464 000002f0 9f a9 41 3a 02 50 40 8c 41 31 72 2d d5 90 d1 60 ..A:.P@.A1r-...` 00:22:29.464 [2024-09-27 13:27:08.072341] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key3, hash=1, dhgroup=4, seq=3775755210, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.465 [2024-09-27 13:27:08.072706] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.465 [2024-09-27 13:27:08.127563] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.465 [2024-09-27 13:27:08.128079] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.465 [2024-09-27 13:27:08.128394] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.465 [2024-09-27 13:27:08.128772] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.465 [2024-09-27 13:27:08.259013] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.465 [2024-09-27 13:27:08.259237] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.465 [2024-09-27 13:27:08.259345] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 4 (ffdhe6144) 00:22:29.465 [2024-09-27 13:27:08.259466] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.465 [2024-09-27 13:27:08.259655] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.465 ctrlr pubkey: 00:22:29.465 00000000 7b f9 21 16 76 62 d7 ad cc ae 5f 9d 23 60 28 a4 {.!.vb...._.#`(. 00:22:29.465 00000010 0f 5e 45 70 24 9a 3a 1a 17 7c 19 54 79 0d a7 65 .^Ep$.:..|.Ty..e 00:22:29.465 00000020 2b 60 a2 59 7a ae 7b 4c 4b 93 3f 2e 40 33 06 5d +`.Yz.{LK.?.@3.] 00:22:29.465 00000030 df 5a 57 81 8b 5c af 4a b0 0f 4c 08 4e b5 f8 d2 .ZW..\.J..L.N... 00:22:29.465 00000040 bb 8d 82 22 64 de 98 82 ee 3e c7 e2 72 77 89 e5 ..."d....>..rw.. 00:22:29.465 00000050 ce 2a 2a d5 ce 52 66 17 6b d6 9e 74 f7 32 2f 76 .**..Rf.k..t.2/v 00:22:29.465 00000060 75 91 df c6 34 93 f6 7f 34 4d ed db b0 2a f9 13 u...4...4M...*.. 00:22:29.465 00000070 6c a2 8e 28 2f 09 cd 69 a6 80 1c 19 97 88 ac c2 l..(/..i........ 00:22:29.465 00000080 b0 03 a9 2d 07 3b 48 d5 e5 df d9 3c 98 d2 b6 3f ...-.;H....<...? 
00:22:29.465 00000090 41 6f 42 76 17 5d 3d e4 28 a8 e4 a8 da 67 06 90 AoBv.]=.(....g.. 00:22:29.465 000000a0 8f 03 45 00 65 94 60 f4 46 92 1d 97 41 3f 6e a0 ..E.e.`.F...A?n. 00:22:29.465 000000b0 ca 86 96 bc 5e 4b 67 5d f9 f0 c2 f6 a4 fe c2 9b ....^Kg]........ 00:22:29.465 000000c0 42 48 5e 1b 11 56 e3 a7 c6 bb f2 ff 52 c1 eb 90 BH^..V......R... 00:22:29.465 000000d0 bf 19 62 39 80 24 fd 0f 93 22 c6 04 ea a9 95 86 ..b9.$..."...... 00:22:29.465 000000e0 0c a7 71 8a 95 cb 4d aa 4d e9 19 18 37 0a 96 04 ..q...M.M...7... 00:22:29.465 000000f0 a5 f6 8c a2 52 0b d3 71 7d a8 f7 bf 45 d6 91 d3 ....R..q}...E... 00:22:29.465 00000100 1c 11 e4 1f 3a 0e 4f d8 9a a0 6b 49 c2 c9 cb b3 ....:.O...kI.... 00:22:29.465 00000110 ce f7 96 f7 64 e5 79 cd 68 55 34 a7 4e 44 d4 d2 ....d.y.hU4.ND.. 00:22:29.465 00000120 29 70 0d b6 3f 13 f9 b7 8a 61 39 a1 9c 95 1d 39 )p..?....a9....9 00:22:29.465 00000130 d5 47 9c 0a 2a 5d 4d 41 53 b5 2d 54 5a 6d 25 06 .G..*]MAS.-TZm%. 00:22:29.465 00000140 b3 db 6d e7 f9 94 d0 71 40 a5 49 be 68 81 7f 3d ..m....q@.I.h..= 00:22:29.465 00000150 bf 60 20 17 c1 b9 21 5f 0c f0 e0 5d a4 45 7a 66 .` ...!_...].Ezf 00:22:29.465 00000160 2d ec 8c 9c af 9a 5a 80 ee bb cb 41 1f d7 6f be -.....Z....A..o. 00:22:29.465 00000170 96 9d 15 c9 81 8e 74 8f 81 34 28 2e 48 34 8a 3f ......t..4(.H4.? 00:22:29.465 00000180 58 37 cc 59 78 42 ce 7d 6a 12 26 dd 02 59 49 e7 X7.YxB.}j.&..YI. 00:22:29.465 00000190 2f be 62 d7 b1 29 97 e8 a7 d3 8c e0 f3 f0 d6 42 /.b..).........B 00:22:29.465 000001a0 ee 8a 38 c0 5a 30 d1 0c 54 14 9a 9a be b4 d2 51 ..8.Z0..T......Q 00:22:29.465 000001b0 6d 01 29 fb 8b 63 df 3b 0b c4 5b e5 5d 25 79 7e m.)..c.;..[.]%y~ 00:22:29.465 000001c0 6f 25 54 fd b8 08 7e 71 1d 1d fd 21 37 0e 30 8c o%T...~q...!7.0. 00:22:29.465 000001d0 9c 0f 07 19 4c c8 0c 74 22 26 67 81 c6 48 5a 33 ....L..t"&g..HZ3 00:22:29.465 000001e0 53 84 33 b5 f2 59 da 06 22 59 f6 72 ec 2e f5 9e S.3..Y.."Y.r.... 00:22:29.465 000001f0 a8 af bb 9b c1 43 c5 8c 9e 03 d3 0e cb ff 18 a2 .....C.......... 00:22:29.465 00000200 14 3c 0b a1 07 0e 3b 70 1f 30 09 f8 ad d6 ee f4 .<....;p.0...... 00:22:29.465 00000210 08 6e 8c bd 3f 57 ca 6d a5 2e e3 4a 8f ff b8 c8 .n..?W.m...J.... 00:22:29.465 00000220 2a 6a e2 74 cd 2c 1f 70 52 56 35 9e c1 4a 07 d4 *j.t.,.pRV5..J.. 00:22:29.465 00000230 ad b1 2c f1 df 7b b0 43 cd 44 5d 37 85 59 11 fb ..,..{.C.D]7.Y.. 00:22:29.465 00000240 1c fc 63 79 2e 02 27 19 61 f9 7f 1c fd 89 58 14 ..cy..'.a.....X. 00:22:29.465 00000250 6d 2a 52 44 18 b8 61 b1 29 a7 62 06 10 07 b3 9b m*RD..a.).b..... 00:22:29.465 00000260 80 69 9c 53 a2 39 39 9f 0d 2d d5 55 e1 06 38 fe .i.S.99..-.U..8. 00:22:29.465 00000270 1b 05 20 cd f6 0b de 46 31 87 ff 08 e7 46 a6 38 .. ....F1....F.8 00:22:29.465 00000280 48 97 75 0a 6b b6 4b ac ca ad 06 78 45 77 a3 dd H.u.k.K....xEw.. 00:22:29.465 00000290 fc af bc 48 10 24 92 41 e1 07 b3 8d 72 3f e5 a5 ...H.$.A....r?.. 00:22:29.465 000002a0 6b 43 3a 47 93 1b 64 5a 0d e0 a9 28 4e 1f de 33 kC:G..dZ...(N..3 00:22:29.465 000002b0 d2 2c 33 fc 40 40 5a 27 45 55 41 b9 0d 72 f5 43 .,3.@@Z'EUA..r.C 00:22:29.465 000002c0 74 e9 f7 b0 6a cf 3f 93 fc 01 a1 8c 53 32 ee 64 t...j.?.....S2.d 00:22:29.465 000002d0 fc 58 0c 50 2a 1c c0 36 31 10 b9 b7 24 b2 77 90 .X.P*..61...$.w. 00:22:29.465 000002e0 67 cf 26 2b f9 79 fc 57 0e 0b a6 1b de ab 03 f1 g.&+.y.W........ 00:22:29.465 000002f0 59 8b 89 aa 2f 63 0e e2 b0 6d 2a c0 e2 21 02 0e Y.../c...m*..!.. 
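Interleaved with the hexdumps, the nvme_auth_set_state records for each qpair walk the same progression: negotiate, await-negotiate, await-challenge, then (after the reply) await-reply, await-success1, optionally await-success2, and done. The snippet below is only a toy model of that observed ordering, not SPDK's actual state machine, which lives in nvme_auth.c and is not reproduced here.

```python
# Simplified model of the auth states named in the surrounding log lines
# (negotiate, await-negotiate, await-challenge, await-reply, await-success1,
# await-success2, done). It only captures the ordering observed here.
from enum import IntEnum

class AuthState(IntEnum):
    NEGOTIATE = 0
    AWAIT_NEGOTIATE = 1
    AWAIT_CHALLENGE = 2
    AWAIT_REPLY = 3
    AWAIT_SUCCESS1 = 4
    AWAIT_SUCCESS2 = 5   # some transactions above go straight to "done"
    DONE = 6

def advance(current: AuthState, new: AuthState) -> AuthState:
    """Within one transaction the log only ever moves forward; model that."""
    if new <= current:
        raise ValueError(f"unexpected transition {current.name} -> {new.name}")
    return new

state = AuthState.NEGOTIATE
for nxt in (AuthState.AWAIT_NEGOTIATE, AuthState.AWAIT_CHALLENGE,
            AuthState.AWAIT_REPLY, AuthState.AWAIT_SUCCESS1, AuthState.DONE):
    state = advance(state, nxt)
print(state.name)   # DONE
```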
00:22:29.465 host pubkey: 00:22:29.465 00000000 9a 1d cc 22 96 9d 71 8a c8 7b 33 29 12 b4 7d 36 ..."..q..{3)..}6 00:22:29.465 00000010 e8 30 7d 0e 02 5a 88 59 d3 2d ed aa 3f 57 73 77 .0}..Z.Y.-..?Wsw 00:22:29.465 00000020 af 9e 28 59 2d 82 05 fd e1 33 54 af 5b 76 1e a0 ..(Y-....3T.[v.. 00:22:29.465 00000030 ef b7 a6 e5 a2 12 bd 0b 93 7d aa 3a a9 e2 28 a4 .........}.:..(. 00:22:29.465 00000040 b6 e7 a0 fc b8 de b3 fe 0a 54 7d 2f 65 7d d3 09 .........T}/e}.. 00:22:29.465 00000050 38 36 a3 1e f1 df 4c e1 8f 89 03 27 9c 0b 83 24 86....L....'...$ 00:22:29.465 00000060 05 02 d3 bc 62 e6 ea 52 03 e4 06 2d 36 16 1d 93 ....b..R...-6... 00:22:29.465 00000070 49 94 b6 74 be 58 0a 3e fd f8 b7 38 8a e6 9a b7 I..t.X.>...8.... 00:22:29.465 00000080 63 ff 05 b0 74 11 c0 db ce b4 9a aa d8 61 54 6f c...t........aTo 00:22:29.465 00000090 7f 3a 1c 20 f7 4d 23 ce 45 bf c6 aa ed 6e 69 b5 .:. .M#.E....ni. 00:22:29.465 000000a0 e8 99 9f 7f be 56 2f 03 40 b7 ec 24 17 74 3c 17 .....V/.@..$.t<. 00:22:29.465 000000b0 ea e1 e5 13 d2 a2 32 e1 be 80 6a e4 df 8e 20 a3 ......2...j... . 00:22:29.465 000000c0 44 0f ec af 8c 68 6e 6a 84 55 16 bf e5 74 b1 39 D....hnj.U...t.9 00:22:29.465 000000d0 35 1b 20 b4 e5 e4 1e 80 2d 52 ae 15 74 b3 e5 55 5. .....-R..t..U 00:22:29.465 000000e0 a6 9a dc 41 2c cf 8a 27 bb 6d 05 96 0a 77 0c 8f ...A,..'.m...w.. 00:22:29.465 000000f0 82 f1 61 75 89 29 77 ba f9 51 69 1f e8 4d 49 af ..au.)w..Qi..MI. 00:22:29.465 00000100 ce 1d 43 bd 3a c1 f0 23 f0 3e 32 04 29 42 29 f7 ..C.:..#.>2.)B). 00:22:29.465 00000110 2d 80 b3 56 d8 c8 9a e7 00 a3 0f 25 ed 2d 68 65 -..V.......%.-he 00:22:29.465 00000120 2f b6 d1 bd 34 66 0a ea d7 8a 4a af 8e 5c aa f9 /...4f....J..\.. 00:22:29.465 00000130 2e 6d fd b6 0d 53 11 5e f4 90 bb a2 dc 4b 42 52 .m...S.^.....KBR 00:22:29.465 00000140 b7 2c de 4a dc fd 86 53 53 eb d1 c2 98 21 21 ae .,.J...SS....!!. 00:22:29.465 00000150 2b 85 9c 93 d7 f0 1c fd 14 89 5c 4f aa ed 3f cc +.........\O..?. 00:22:29.465 00000160 99 2d 02 32 42 0e 4e 8d 6b 40 01 49 1a ae a6 95 .-.2B.N.k@.I.... 00:22:29.465 00000170 20 74 ae 52 de 84 65 f2 de af da 1e a6 d9 ea 82 t.R..e......... 00:22:29.465 00000180 c4 a2 24 c8 c0 ec 0c 1f f4 6a 44 29 f4 61 2c 6f ..$......jD).a,o 00:22:29.465 00000190 ff 9a a9 c9 d2 b4 d4 7e 04 a0 46 17 63 33 4f 63 .......~..F.c3Oc 00:22:29.465 000001a0 ed 7b fd 07 b9 e1 72 e5 78 06 66 3d cd a3 1d ff .{....r.x.f=.... 00:22:29.465 000001b0 06 e0 f0 a9 b0 30 4a b7 85 7d 41 9a c0 40 b0 b5 .....0J..}A..@.. 00:22:29.465 000001c0 43 5d 00 eb 2d c4 94 0f f5 24 14 e7 40 38 8d a3 C]..-....$..@8.. 00:22:29.465 000001d0 39 7a 26 a5 23 38 fa 33 be e4 f3 42 b9 9c b6 5a 9z&.#8.3...B...Z 00:22:29.465 000001e0 be 65 09 69 51 76 32 f5 60 ae d9 61 fc e4 03 d8 .e.iQv2.`..a.... 00:22:29.465 000001f0 1f 73 53 8f ad 51 3f d8 d4 65 a1 a7 a8 1b cd 26 .sS..Q?..e.....& 00:22:29.465 00000200 e3 b9 e5 80 d5 a4 87 97 f7 db 7a 1d d9 1f 64 fe ..........z...d. 00:22:29.465 00000210 2d 2d 96 12 20 0f 32 17 67 57 31 e4 24 6d b3 c7 --.. .2.gW1.$m.. 00:22:29.465 00000220 de 5c e7 a7 e4 14 63 fe c4 ea 38 36 a5 d8 04 31 .\....c...86...1 00:22:29.465 00000230 39 9f 77 c4 a8 fe 99 4a 76 3e 58 a8 67 59 c9 99 9.w....Jv>X.gY.. 00:22:29.466 00000240 8f 3b 01 e7 5c 95 aa 9a 26 ad 63 28 2e 10 f6 26 .;..\...&.c(...& 00:22:29.466 00000250 10 85 69 ee df 57 b6 0f d0 c8 4f 6c 78 ab 2a a8 ..i..W....Olx.*. 00:22:29.466 00000260 c0 8d db fe 4d b4 50 ae df 6c 3c da bd 6d cb 02 ....M.P..l<..m.. 
00:22:29.466 00000270 58 99 47 4e b4 aa e9 4b 9b 49 e9 67 6d 8a d4 5b X.GN...K.I.gm..[ 00:22:29.466 00000280 65 c1 78 cf 72 7c 12 1b 3f fb 80 ef 8f 2f b1 53 e.x.r|..?..../.S 00:22:29.466 00000290 91 c0 96 82 78 a6 d8 4b aa 69 de 01 d2 bf 09 75 ....x..K.i.....u 00:22:29.466 000002a0 d9 b6 8b 91 f6 32 d1 51 32 5a a9 d4 34 b8 8b 4e .....2.Q2Z..4..N 00:22:29.466 000002b0 a2 ea c2 ec 62 90 66 87 60 8c 92 4d b9 0d 4c af ....b.f.`..M..L. 00:22:29.466 000002c0 3f df 85 e2 83 c5 a2 40 9d 00 ef 5f d2 8f 53 05 ?......@..._..S. 00:22:29.466 000002d0 47 48 6b 16 25 eb 17 e1 ef 7c 9a b8 bf 6f a8 6a GHk.%....|...o.j 00:22:29.466 000002e0 c8 18 82 a1 34 03 07 6f 19 4c e9 e7 38 bf a1 f2 ....4..o.L..8... 00:22:29.466 000002f0 76 47 25 10 03 fa 7d d3 52 b6 7d e8 cc 8f 37 ec vG%...}.R.}...7. 00:22:29.466 dh secret: 00:22:29.466 00000000 f5 bc b0 bc 80 92 e3 27 ca 4b c1 42 f8 c2 1b d8 .......'.K.B.... 00:22:29.466 00000010 38 ba a8 ad cb a3 63 ad 95 ec dd 7d 9e 08 df 52 8.....c....}...R 00:22:29.466 00000020 a9 9c 96 41 db 4a 30 4c ab e6 13 92 4b 30 3e 15 ...A.J0L....K0>. 00:22:29.466 00000030 5f 6a 56 80 30 c6 f7 c9 c9 66 81 4a 6e 93 18 c2 _jV.0....f.Jn... 00:22:29.466 00000040 d8 ef e7 d0 87 ea fc d1 b2 02 a6 3a d9 8c 53 0d ...........:..S. 00:22:29.466 00000050 bf d9 05 1b d0 5e fd 0e f6 48 d3 b6 44 5e c3 b0 .....^...H..D^.. 00:22:29.466 00000060 e1 f4 0b 10 25 af 95 a6 f0 6f d1 06 fc 9c 22 6a ....%....o...."j 00:22:29.466 00000070 9e 34 8c 23 b2 c8 79 98 d3 8c fa df 63 9f 5e 46 .4.#..y.....c.^F 00:22:29.466 00000080 6a bf 3c e9 b3 11 7f 3a 7d 96 89 8c 28 51 85 24 j.<....:}...(Q.$ 00:22:29.466 00000090 da 28 3d 8c a3 e2 f0 1f 27 38 4f 10 e2 f8 53 19 .(=.....'8O...S. 00:22:29.466 000000a0 76 0a e5 29 10 93 29 ef e0 ae 8d 8e c8 a1 ca 99 v..)..)......... 00:22:29.466 000000b0 57 6e 3e 49 f6 f2 ed ba cd 48 07 67 83 b6 dc 5c Wn>I.....H.g...\ 00:22:29.466 000000c0 3d fc 7f b6 19 9d 25 f5 d4 30 82 bb c9 06 51 36 =.....%..0....Q6 00:22:29.466 000000d0 03 c4 9a b0 96 9f 6e 7e 7d f5 79 fd e5 f2 43 21 ......n~}.y...C! 00:22:29.466 000000e0 93 f1 6a 3b c2 1a d8 87 1e 1d d9 00 89 66 87 51 ..j;.........f.Q 00:22:29.466 000000f0 f9 1c 27 88 cb 50 bb 73 cd f0 d0 41 44 11 79 96 ..'..P.s...AD.y. 00:22:29.466 00000100 20 cf fc b6 03 d8 35 ac b1 73 70 68 e5 95 df 3e .....5..sph...> 00:22:29.466 00000110 99 ed b5 5b 4f dc 93 d8 4b 05 58 12 74 e6 42 66 ...[O...K.X.t.Bf 00:22:29.466 00000120 af cb ac e7 aa d9 23 f8 2f 29 4a 23 8a e2 9e d4 ......#./)J#.... 00:22:29.466 00000130 36 ea b0 6e fa db 80 cb f0 e9 91 62 87 5a 01 a1 6..n.......b.Z.. 00:22:29.466 00000140 f5 42 7a 16 24 7f 26 4c 92 c0 19 50 98 5b 5b f5 .Bz.$.&L...P.[[. 00:22:29.466 00000150 42 2d b4 e6 b0 cb 9d 0c b2 07 69 c7 14 c3 a6 90 B-........i..... 00:22:29.466 00000160 64 bb dc ce 39 8a 68 0c 4a 1a e9 11 7c b0 6f 2e d...9.h.J...|.o. 00:22:29.466 00000170 83 7a 4f ec f8 f9 34 34 28 76 30 e2 aa 0b 6e eb .zO...44(v0...n. 00:22:29.466 00000180 80 0e 7d 0b c0 72 3b 44 4d d7 41 b5 89 ad 38 e8 ..}..r;DM.A...8. 00:22:29.466 00000190 f5 5b 31 80 72 0e 31 1f 07 50 a6 5e b9 52 ce 1a .[1.r.1..P.^.R.. 00:22:29.466 000001a0 06 f1 a0 f4 ac 8c 34 54 02 65 0d 86 d4 34 55 3d ......4T.e...4U= 00:22:29.466 000001b0 92 08 c0 4e 35 ae da 11 62 f4 5f 37 dc f8 36 50 ...N5...b._7..6P 00:22:29.466 000001c0 28 21 be 7a 94 d8 50 c2 b5 cc aa 82 06 36 af 31 (!.z..P......6.1 00:22:29.466 000001d0 90 2f 57 cc 23 2b bb 65 58 60 16 a6 33 44 bf f9 ./W.#+.eX`..3D.. 00:22:29.466 000001e0 f8 92 5f f5 09 b7 08 dd 63 ec 2b d9 02 ed ee b6 .._.....c.+..... 
00:22:29.466 000001f0 02 94 8e 54 85 58 80 55 46 f9 7c 85 1f ba 35 f9 ...T.X.UF.|...5. 00:22:29.466 00000200 89 da 4e 6d 71 93 ae 19 88 96 d1 b4 e1 e9 9a a6 ..Nmq........... 00:22:29.466 00000210 60 4b bf d1 be a7 fa 89 b2 dd cb a6 1a 2f 9f bf `K.........../.. 00:22:29.466 00000220 91 a0 57 27 bb c5 eb 97 df 56 2f cd 9a a8 24 3e ..W'.....V/...$> 00:22:29.466 00000230 14 de b0 0a 6a 76 9a 1d db ff 5d 73 10 8f ca 32 ....jv....]s...2 00:22:29.466 00000240 06 6f 3e ef 6d d0 0a be 78 55 44 15 d6 60 7c 71 .o>.m...xUD..`|q 00:22:29.466 00000250 f2 14 f2 36 c7 30 b2 ec 97 cd b6 b7 d7 9b 2f 11 ...6.0......../. 00:22:29.466 00000260 b0 c0 ff 26 7e a1 fc 0d 35 a0 13 61 a0 19 41 7f ...&~...5..a..A. 00:22:29.466 00000270 62 e9 c3 d9 41 e2 b6 63 b0 c5 bd 60 fa 07 02 5c b...A..c...`...\ 00:22:29.466 00000280 4f bf d8 78 6b ee 2e 51 2f 0b 51 0c f8 28 f8 6d O..xk..Q/.Q..(.m 00:22:29.466 00000290 fa b3 50 7e 0a 90 33 c9 24 7d 36 43 2a ee c2 2d ..P~..3.$}6C*..- 00:22:29.466 000002a0 af 8c 27 46 5d e8 f9 d7 be 08 66 54 0c f4 6a 65 ..'F].....fT..je 00:22:29.466 000002b0 a6 7b d1 4a 4f a8 dd 1b fe 5b d2 95 2b 37 4a d2 .{.JO....[..+7J. 00:22:29.466 000002c0 88 1e 65 9c b4 ac f5 67 e4 18 31 74 23 18 aa 1d ..e....g..1t#... 00:22:29.466 000002d0 db f7 2d 44 85 22 41 d0 48 95 f3 1f 3d 5f 6c 45 ..-D."A.H...=_lE 00:22:29.466 000002e0 8b 84 ad 30 ce ae 34 eb 49 cb 09 dd 5c 5f 9a 2e ...0..4.I...\_.. 00:22:29.466 000002f0 3a 38 71 2a 6f 3c 9c 62 50 9d 19 70 2b 26 9c c6 :8q*o<.bP..p+&.. 00:22:29.466 [2024-09-27 13:27:08.332234] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key4, hash=1, dhgroup=4, seq=3775755211, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.466 [2024-09-27 13:27:08.332534] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.466 [2024-09-27 13:27:08.384492] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.466 [2024-09-27 13:27:08.384931] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.466 [2024-09-27 13:27:08.385166] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.466 [2024-09-27 13:27:08.437104] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.466 [2024-09-27 13:27:08.437364] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 1 (sha256) 00:22:29.466 [2024-09-27 13:27:08.437657] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 4 (ffdhe6144) 00:22:29.466 [2024-09-27 13:27:08.437801] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.466 [2024-09-27 13:27:08.438109] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.466 ctrlr pubkey: 00:22:29.466 00000000 7b f9 21 16 76 62 d7 ad cc ae 5f 9d 23 60 28 a4 {.!.vb...._.#`(. 
00:22:29.466 00000010 0f 5e 45 70 24 9a 3a 1a 17 7c 19 54 79 0d a7 65 .^Ep$.:..|.Ty..e 00:22:29.466 00000020 2b 60 a2 59 7a ae 7b 4c 4b 93 3f 2e 40 33 06 5d +`.Yz.{LK.?.@3.] 00:22:29.466 00000030 df 5a 57 81 8b 5c af 4a b0 0f 4c 08 4e b5 f8 d2 .ZW..\.J..L.N... 00:22:29.466 00000040 bb 8d 82 22 64 de 98 82 ee 3e c7 e2 72 77 89 e5 ..."d....>..rw.. 00:22:29.466 00000050 ce 2a 2a d5 ce 52 66 17 6b d6 9e 74 f7 32 2f 76 .**..Rf.k..t.2/v 00:22:29.466 00000060 75 91 df c6 34 93 f6 7f 34 4d ed db b0 2a f9 13 u...4...4M...*.. 00:22:29.466 00000070 6c a2 8e 28 2f 09 cd 69 a6 80 1c 19 97 88 ac c2 l..(/..i........ 00:22:29.466 00000080 b0 03 a9 2d 07 3b 48 d5 e5 df d9 3c 98 d2 b6 3f ...-.;H....<...? 00:22:29.466 00000090 41 6f 42 76 17 5d 3d e4 28 a8 e4 a8 da 67 06 90 AoBv.]=.(....g.. 00:22:29.466 000000a0 8f 03 45 00 65 94 60 f4 46 92 1d 97 41 3f 6e a0 ..E.e.`.F...A?n. 00:22:29.466 000000b0 ca 86 96 bc 5e 4b 67 5d f9 f0 c2 f6 a4 fe c2 9b ....^Kg]........ 00:22:29.466 000000c0 42 48 5e 1b 11 56 e3 a7 c6 bb f2 ff 52 c1 eb 90 BH^..V......R... 00:22:29.466 000000d0 bf 19 62 39 80 24 fd 0f 93 22 c6 04 ea a9 95 86 ..b9.$..."...... 00:22:29.466 000000e0 0c a7 71 8a 95 cb 4d aa 4d e9 19 18 37 0a 96 04 ..q...M.M...7... 00:22:29.466 000000f0 a5 f6 8c a2 52 0b d3 71 7d a8 f7 bf 45 d6 91 d3 ....R..q}...E... 00:22:29.466 00000100 1c 11 e4 1f 3a 0e 4f d8 9a a0 6b 49 c2 c9 cb b3 ....:.O...kI.... 00:22:29.466 00000110 ce f7 96 f7 64 e5 79 cd 68 55 34 a7 4e 44 d4 d2 ....d.y.hU4.ND.. 00:22:29.466 00000120 29 70 0d b6 3f 13 f9 b7 8a 61 39 a1 9c 95 1d 39 )p..?....a9....9 00:22:29.466 00000130 d5 47 9c 0a 2a 5d 4d 41 53 b5 2d 54 5a 6d 25 06 .G..*]MAS.-TZm%. 00:22:29.466 00000140 b3 db 6d e7 f9 94 d0 71 40 a5 49 be 68 81 7f 3d ..m....q@.I.h..= 00:22:29.466 00000150 bf 60 20 17 c1 b9 21 5f 0c f0 e0 5d a4 45 7a 66 .` ...!_...].Ezf 00:22:29.466 00000160 2d ec 8c 9c af 9a 5a 80 ee bb cb 41 1f d7 6f be -.....Z....A..o. 00:22:29.466 00000170 96 9d 15 c9 81 8e 74 8f 81 34 28 2e 48 34 8a 3f ......t..4(.H4.? 00:22:29.466 00000180 58 37 cc 59 78 42 ce 7d 6a 12 26 dd 02 59 49 e7 X7.YxB.}j.&..YI. 00:22:29.466 00000190 2f be 62 d7 b1 29 97 e8 a7 d3 8c e0 f3 f0 d6 42 /.b..).........B 00:22:29.466 000001a0 ee 8a 38 c0 5a 30 d1 0c 54 14 9a 9a be b4 d2 51 ..8.Z0..T......Q 00:22:29.466 000001b0 6d 01 29 fb 8b 63 df 3b 0b c4 5b e5 5d 25 79 7e m.)..c.;..[.]%y~ 00:22:29.466 000001c0 6f 25 54 fd b8 08 7e 71 1d 1d fd 21 37 0e 30 8c o%T...~q...!7.0. 00:22:29.466 000001d0 9c 0f 07 19 4c c8 0c 74 22 26 67 81 c6 48 5a 33 ....L..t"&g..HZ3 00:22:29.466 000001e0 53 84 33 b5 f2 59 da 06 22 59 f6 72 ec 2e f5 9e S.3..Y.."Y.r.... 00:22:29.466 000001f0 a8 af bb 9b c1 43 c5 8c 9e 03 d3 0e cb ff 18 a2 .....C.......... 00:22:29.466 00000200 14 3c 0b a1 07 0e 3b 70 1f 30 09 f8 ad d6 ee f4 .<....;p.0...... 00:22:29.466 00000210 08 6e 8c bd 3f 57 ca 6d a5 2e e3 4a 8f ff b8 c8 .n..?W.m...J.... 00:22:29.466 00000220 2a 6a e2 74 cd 2c 1f 70 52 56 35 9e c1 4a 07 d4 *j.t.,.pRV5..J.. 00:22:29.466 00000230 ad b1 2c f1 df 7b b0 43 cd 44 5d 37 85 59 11 fb ..,..{.C.D]7.Y.. 00:22:29.466 00000240 1c fc 63 79 2e 02 27 19 61 f9 7f 1c fd 89 58 14 ..cy..'.a.....X. 00:22:29.466 00000250 6d 2a 52 44 18 b8 61 b1 29 a7 62 06 10 07 b3 9b m*RD..a.).b..... 00:22:29.466 00000260 80 69 9c 53 a2 39 39 9f 0d 2d d5 55 e1 06 38 fe .i.S.99..-.U..8. 00:22:29.466 00000270 1b 05 20 cd f6 0b de 46 31 87 ff 08 e7 46 a6 38 .. ....F1....F.8 00:22:29.466 00000280 48 97 75 0a 6b b6 4b ac ca ad 06 78 45 77 a3 dd H.u.k.K....xEw.. 
00:22:29.466 00000290 fc af bc 48 10 24 92 41 e1 07 b3 8d 72 3f e5 a5 ...H.$.A....r?.. 00:22:29.466 000002a0 6b 43 3a 47 93 1b 64 5a 0d e0 a9 28 4e 1f de 33 kC:G..dZ...(N..3 00:22:29.466 000002b0 d2 2c 33 fc 40 40 5a 27 45 55 41 b9 0d 72 f5 43 .,3.@@Z'EUA..r.C 00:22:29.466 000002c0 74 e9 f7 b0 6a cf 3f 93 fc 01 a1 8c 53 32 ee 64 t...j.?.....S2.d 00:22:29.466 000002d0 fc 58 0c 50 2a 1c c0 36 31 10 b9 b7 24 b2 77 90 .X.P*..61...$.w. 00:22:29.466 000002e0 67 cf 26 2b f9 79 fc 57 0e 0b a6 1b de ab 03 f1 g.&+.y.W........ 00:22:29.466 000002f0 59 8b 89 aa 2f 63 0e e2 b0 6d 2a c0 e2 21 02 0e Y.../c...m*..!.. 00:22:29.466 host pubkey: 00:22:29.466 00000000 09 b3 fb a3 b6 cc 2f c9 50 ef e4 76 db 46 54 64 ....../.P..v.FTd 00:22:29.466 00000010 f4 e0 6e 30 d1 3b 23 f7 aa 7d 79 58 cf 07 dc 88 ..n0.;#..}yX.... 00:22:29.466 00000020 43 2e f2 ca 9d 38 14 ec 26 92 9b 45 63 01 ef bf C....8..&..Ec... 00:22:29.466 00000030 b8 15 e2 23 dd 90 c2 3a a0 1e a7 7b 40 69 7d d7 ...#...:...{@i}. 00:22:29.466 00000040 87 ec 6e ea 02 f5 c0 4c 23 b2 41 fb 10 88 e8 93 ..n....L#.A..... 00:22:29.466 00000050 38 32 d5 56 d7 2f 7d e9 d8 7a 29 55 c6 5f 9f 4e 82.V./}..z)U._.N 00:22:29.466 00000060 cc 33 08 4e 50 03 0c 49 6f 50 10 4a c1 aa d3 91 .3.NP..IoP.J.... 00:22:29.466 00000070 fc 4d 04 c8 1c de 87 eb ac 5e e5 9a 4a 3f 35 2b .M.......^..J?5+ 00:22:29.467 00000080 b5 05 41 a9 81 9e a5 24 cf 5f f3 56 dd 55 0f da ..A....$._.V.U.. 00:22:29.467 00000090 22 22 41 10 44 b2 8b 6f f4 0b b7 8b 5b ac 3c a3 ""A.D..o....[.<. 00:22:29.467 000000a0 d4 b9 5b e7 4c 4b f5 3c 91 22 a5 d2 52 15 92 a1 ..[.LK.<."..R... 00:22:29.467 000000b0 32 dd d7 a0 ca 5f 32 79 cc 4e 29 b0 79 df b6 8b 2...._2y.N).y... 00:22:29.467 000000c0 6b 77 d6 8a 95 b3 32 17 d3 13 e0 59 0a dc 87 b2 kw....2....Y.... 00:22:29.467 000000d0 25 b9 04 52 3f a6 aa 47 57 d0 0e 05 0c 82 a5 31 %..R?..GW......1 00:22:29.467 000000e0 82 0b f2 9f 33 8c 7c db ec f5 81 64 62 3c d5 cb ....3.|....db<.. 00:22:29.467 000000f0 a3 fc 2b 34 36 e2 a5 41 e0 5c 53 cf 6b 37 ff 76 ..+46..A.\S.k7.v 00:22:29.467 00000100 5a 07 78 0c 66 ec a6 da 7d 89 06 c7 62 de b1 01 Z.x.f...}...b... 00:22:29.467 00000110 66 e0 f9 c5 ee d7 b5 f5 11 be 59 b7 76 83 3b d3 f.........Y.v.;. 00:22:29.467 00000120 d3 8a 68 c5 1d 86 fc 0b b9 a5 29 72 a5 f9 f7 fc ..h.......)r.... 00:22:29.467 00000130 d4 57 8a ca f2 ee 70 af 16 01 70 32 94 08 7b 46 .W....p...p2..{F 00:22:29.467 00000140 55 40 80 bb 73 d7 ea 23 b0 de f3 3a 56 77 69 1c U@..s..#...:Vwi. 00:22:29.467 00000150 af 8a ae d1 76 a1 f4 9b 28 0d c4 b2 ec eb 83 3c ....v...(......< 00:22:29.467 00000160 0d f4 f3 a5 7f cb f2 1e f7 33 80 7f 7d a2 a5 07 .........3..}... 00:22:29.467 00000170 4a 94 fd d9 fb 18 db 45 f9 e5 66 d9 80 f3 6d 3a J......E..f...m: 00:22:29.467 00000180 0f ef c2 0f 79 8c 09 c8 0c 09 44 ac e0 c1 12 6e ....y.....D....n 00:22:29.467 00000190 fe 5f 64 16 e9 d3 0a 21 c5 52 3a b2 b6 c6 2c cf ._d....!.R:...,. 00:22:29.467 000001a0 e5 00 50 1c ee a2 0f ca 0f ba 78 e2 23 a3 cc d7 ..P.......x.#... 00:22:29.467 000001b0 34 1b cf 0f ad 5f 37 45 88 36 2b 2d 4b ca 19 cd 4...._7E.6+-K... 00:22:29.467 000001c0 c2 77 4f 7f b5 30 a8 1b 85 9c 4a 2d 08 d2 05 7c .wO..0....J-...| 00:22:29.467 000001d0 16 dc 78 cb f9 39 2d 75 1d 52 34 e5 72 3f 42 e7 ..x..9-u.R4.r?B. 00:22:29.467 000001e0 d0 3d 3f 77 b3 de 4e 94 a8 f9 36 07 ca 73 fa 90 .=?w..N...6..s.. 00:22:29.467 000001f0 7e b6 58 95 45 d2 3c 07 f2 4e 20 bc 69 a1 e5 27 ~.X.E.<..N .i..' 
00:22:29.467 00000200 5f 5b 7e bf 56 5f 73 8d 9c c1 8c 9a f0 9b 0d 4a _[~.V_s........J 00:22:29.467 00000210 ed c1 c3 9d fb 05 9c 13 fe ff 28 1f b6 5e aa 52 ..........(..^.R 00:22:29.467 00000220 64 84 8b 7a 55 89 66 66 7b 94 29 d4 a8 84 63 0b d..zU.ff{.)...c. 00:22:29.467 00000230 6e f1 39 ad 72 08 3d 1a ea 94 5c ba 30 47 4d 1b n.9.r.=...\.0GM. 00:22:29.467 00000240 41 45 42 2e 71 ce 01 17 3c d0 20 e3 09 39 b0 b9 AEB.q...<. ..9.. 00:22:29.467 00000250 be d4 b3 4d f6 84 44 26 5a 12 0d c0 79 c7 e5 5a ...M..D&Z...y..Z 00:22:29.467 00000260 de 97 cc 8f dc 14 ba da f0 a8 1b 46 7e a1 44 b0 ...........F~.D. 00:22:29.467 00000270 85 dd fa 2b 28 eb 7a 32 83 77 ec 66 db 6a ee e7 ...+(.z2.w.f.j.. 00:22:29.467 00000280 4b cc 9a 58 ec ec 71 b6 5e 80 52 c0 57 b7 dc 28 K..X..q.^.R.W..( 00:22:29.467 00000290 b1 64 77 07 55 70 ab 41 ff 6f b7 f4 79 b9 e2 d6 .dw.Up.A.o..y... 00:22:29.467 000002a0 ca 83 29 d1 69 af 33 ad ae 68 77 72 d5 ab e7 f7 ..).i.3..hwr.... 00:22:29.467 000002b0 af 39 96 71 57 e6 26 f7 b1 16 6a 9e 04 10 5b 58 .9.qW.&...j...[X 00:22:29.467 000002c0 37 95 1d 23 63 4e 2e 5b 75 1a f5 f9 8a 67 ea 32 7..#cN.[u....g.2 00:22:29.467 000002d0 88 13 40 4b 91 df bd 24 3d f1 b9 93 79 9d d5 38 ..@K...$=...y..8 00:22:29.467 000002e0 0c d9 3b 72 7e 1e d8 dd 8a b7 75 83 4d 07 45 b6 ..;r~.....u.M.E. 00:22:29.467 000002f0 ca 8e e7 2f 8f d4 a8 48 6b 29 2f 47 b1 5e 21 35 .../...Hk)/G.^!5 00:22:29.467 dh secret: 00:22:29.467 00000000 e1 86 8a 73 a8 69 5f ed 27 9b f0 2e 8e 4c cd 5c ...s.i_.'....L.\ 00:22:29.467 00000010 dc f3 b3 ed c3 f2 91 ec ab ff 34 b4 a2 e7 da 6f ..........4....o 00:22:29.467 00000020 ec ad 8e bc b2 74 d7 a8 58 2a 9d 04 3f 9c d0 2f .....t..X*..?../ 00:22:29.467 00000030 f6 45 b8 1a 5c ad 9a ee de fb 61 6a b6 a4 56 50 .E..\.....aj..VP 00:22:29.467 00000040 e3 1e 5c 6a d7 0e eb af 67 ab 7f 9a 6e 8c a6 13 ..\j....g...n... 00:22:29.467 00000050 33 a6 07 39 da 5b 77 93 c0 19 71 20 d0 25 c4 82 3..9.[w...q .%.. 00:22:29.467 00000060 7d 13 7a 04 c5 bb 21 8d 24 3c fc 83 92 cf 14 14 }.z...!.$<...... 00:22:29.467 00000070 14 a6 8e 96 a4 d6 80 e4 65 9b 80 d4 47 3c 2e c5 ........e...G<.. 00:22:29.467 00000080 32 1b f3 db 68 02 24 d9 77 18 9a f9 04 86 91 0a 2...h.$.w....... 00:22:29.467 00000090 22 93 9e da 93 3d 7d 92 26 a2 3c 64 24 be 30 80 "....=}.&..x4lYV.. 00:22:29.467 00000100 c7 13 c2 8f 1e 32 f6 1b a1 9d 73 35 62 17 f1 1b .....2....s5b... 00:22:29.467 00000110 c7 7b 9b bd 18 9f 0d a5 be 15 5f c3 40 ac 0b ba .{........_.@... 00:22:29.467 00000120 d0 a5 1a e3 5c 26 4b 1d 6d a4 c6 32 4e c1 71 a2 ....\&K.m..2N.q. 00:22:29.467 00000130 24 c2 fd d8 ad 43 6a 63 29 b7 22 5d 47 13 3c d1 $....Cjc)."]G.<. 00:22:29.467 00000140 11 02 9a c3 a9 f8 78 2b a5 66 8f 7b 47 43 cf d1 ......x+.f.{GC.. 00:22:29.467 00000150 ad 52 da 39 68 57 80 03 69 ce 7d 91 f5 f9 c0 f8 .R.9hW..i.}..... 00:22:29.467 00000160 09 1c 7e 7f 9a 54 fc 56 4c 94 5c 54 de ac 28 53 ..~..T.VL.\T..(S 00:22:29.467 00000170 a9 b6 5b 6b 25 b6 55 16 de 57 1c 13 04 ea 10 9c ..[k%.U..W...... 00:22:29.467 00000180 8a 7d 3a 54 a8 e9 26 b6 6c db 96 ce 02 45 e9 90 .}:T..&.l....E.. 00:22:29.467 00000190 c7 93 9c b8 b2 a5 4d 7e a0 92 79 d2 05 e1 82 b1 ......M~..y..... 00:22:29.467 000001a0 ab f0 0f 29 3a 03 76 12 7e 45 a8 72 75 74 71 c7 ...):.v.~E.rutq. 00:22:29.467 000001b0 1f ad b2 a5 c6 c6 18 42 24 d6 b5 dc fb a3 a3 e2 .......B$....... 00:22:29.467 000001c0 2b 31 dd aa d9 3f f2 d3 da c0 08 e9 a2 ab cc 2e +1...?.......... 00:22:29.467 000001d0 65 c6 f8 ff ae 86 10 43 0f bb 74 17 29 cb 05 82 e......C..t.)... 
00:22:29.467 000001e0 b5 f6 7a fc 6d 27 e2 6f d6 3b a4 bd 5d f3 70 a4 ..z.m'.o.;..].p. 00:22:29.467 000001f0 a2 68 bb 3f 45 d8 3b 35 5b 8d 9a fe cb 30 f6 d9 .h.?E.;5[....0.. 00:22:29.467 00000200 ec b0 32 a9 9c 1b 87 c1 7c cd bf 2c 2d ed 80 ff ..2.....|..,-... 00:22:29.467 00000210 11 3c f3 2d d1 1b b8 87 ab fb d2 e1 e1 6e 0b 89 .<.-.........n.. 00:22:29.467 00000220 a4 bc 8e e2 5b c1 40 53 f1 f2 59 9a 37 a0 b3 bd ....[.@S..Y.7... 00:22:29.467 00000230 ca d3 6b 77 58 71 bc f1 98 aa 74 ad b5 97 9e 08 ..kwXq....t..... 00:22:29.467 00000240 2f 42 d8 46 54 ea 19 b7 0a d6 6c 68 d2 01 6f 1b /B.FT.....lh..o. 00:22:29.467 00000250 d2 4c 88 3a 9c c7 bc 16 48 7b 96 4b 7f 01 a2 81 .L.:....H{.K.... 00:22:29.467 00000260 b9 69 b4 10 cf 84 48 cf 9e 9b bd 98 fb eb 61 4e .i....H.......aN 00:22:29.467 00000270 7e 7a 36 16 c5 ca f0 f3 ff bd d6 4a be 4e 5b 10 ~z6........J.N[. 00:22:29.467 00000280 ed 97 0c f9 ca 1b 2f fb 85 e5 0b 9f d5 56 6c 41 ....../......VlA 00:22:29.467 00000290 6f 19 42 25 da 65 11 5f 93 4e b5 16 1c 7c 95 9b o.B%.e._.N...|.. 00:22:29.467 000002a0 f3 e9 6e 23 db 98 63 89 ea 33 91 84 a2 dc e3 63 ..n#..c..3.....c 00:22:29.467 000002b0 d4 76 aa e2 83 a8 a0 90 b5 e9 29 01 0a ed f8 2c .v........)...., 00:22:29.467 000002c0 ef f5 e6 10 72 f7 02 1f f7 5e 8c 2d a2 d7 d2 4a ....r....^.-...J 00:22:29.467 000002d0 99 d7 93 9f 47 84 cf c5 51 6d 1f 24 8c 39 1a fc ....G...Qm.$.9.. 00:22:29.467 000002e0 1b b4 8e 5a 5e cf f4 3e 9d 06 2e ee 4a ef 3a a8 ...Z^..>....J.:. 00:22:29.467 000002f0 ad a0 6d 7f ae 96 25 51 31 54 db 9b b5 9f f9 51 ..m...%Q1T.....Q 00:22:29.467 [2024-09-27 13:27:08.511083] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key4, hash=1, dhgroup=4, seq=3775755212, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.467 [2024-09-27 13:27:08.511450] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.467 [2024-09-27 13:27:08.564795] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.467 [2024-09-27 13:27:08.565369] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.467 [2024-09-27 13:27:08.565711] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.467 [2024-09-27 13:27:08.720822] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.467 [2024-09-27 13:27:08.721179] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.467 [2024-09-27 13:27:08.721410] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 5 (ffdhe8192) 00:22:29.467 [2024-09-27 13:27:08.721581] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.467 [2024-09-27 13:27:08.722026] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.467 ctrlr pubkey: 00:22:29.467 00000000 40 27 f2 da bc 6d 72 69 72 65 ef e1 a8 d1 32 c6 @'...mrire....2. 
00:22:29.467 00000010 42 a5 ba 1a 7f f0 25 b5 0d b1 2d d7 4b 8c e1 d5 B.....%...-.K... 00:22:29.467 00000020 c8 b5 1b d2 df 5d 8a 11 4a 9d d9 20 c2 16 9e 12 .....]..J.. .... 00:22:29.467 00000030 62 ea 9e b9 a9 b9 dd 3d 93 de 17 5e 22 d4 89 ff b......=...^"... 00:22:29.467 00000040 a9 29 2f 69 9d c7 cc 15 f7 bf e8 c4 ca 38 00 ca .)/i.........8.. 00:22:29.467 00000050 ab 68 f8 fd 69 0c 47 74 75 29 22 de 9e 0a 24 d2 .h..i.Gtu)"...$. 00:22:29.467 00000060 fd 1b 6d 9b 9e 31 5f bc 4c 51 65 9d 4d 11 a1 73 ..m..1_.LQe.M..s 00:22:29.467 00000070 43 89 d3 cc 68 c2 a0 c7 57 af ed ba 1c af 19 e4 C...h...W....... 00:22:29.467 00000080 50 9f ed c1 2c 6b 57 1a 78 3e b9 b6 88 e6 7c 1a P...,kW.x>....|. 00:22:29.467 00000090 08 18 2e 45 15 e0 d5 c0 f1 ba 70 3d 74 73 05 d9 ...E......p=ts.. 00:22:29.467 000000a0 33 07 4d ec ee ac f2 c8 c9 e3 5c d7 48 30 e4 3d 3.M.......\.H0.= 00:22:29.467 000000b0 0f ec 94 be 0c e9 d1 9b a5 1c 89 09 2a 5a 33 05 ............*Z3. 00:22:29.467 000000c0 1b 3c 12 ea c9 95 2c 86 c8 e1 0a 2c 8f bc e7 00 .<....,....,.... 00:22:29.467 000000d0 c9 78 e7 82 7b aa b5 af f5 23 0e 26 b4 a2 c7 df .x..{....#.&.... 00:22:29.467 000000e0 71 5e 90 c4 91 70 59 38 50 7b 8d 49 2e 82 02 5e q^...pY8P{.I...^ 00:22:29.467 000000f0 31 b4 99 1f 47 b1 66 20 28 e8 97 67 48 08 8f b6 1...G.f (..gH... 00:22:29.467 00000100 88 ce a8 4e b0 86 38 d8 63 01 5c 86 13 7c 1e 8a ...N..8.c.\..|.. 00:22:29.467 00000110 8a 36 5c 5c 3a d4 40 cf e9 92 e5 33 4b 9d ab c0 .6\\:.@....3K... 00:22:29.467 00000120 9a a7 78 fa 1b 62 6d be d3 a7 f3 02 3e 41 8c ea ..x..bm.....>A.. 00:22:29.467 00000130 1f 02 53 96 9c 76 76 1f 80 1c 03 bc dd 10 ba fc ..S..vv......... 00:22:29.467 00000140 50 18 b5 49 19 ea d2 27 4f 66 a7 b3 18 1d d7 8f P..I...'Of...... 00:22:29.467 00000150 98 63 1a 94 f7 b1 f1 16 d5 22 45 1d 67 4f f5 70 .c......."E.gO.p 00:22:29.467 00000160 e9 91 19 4a b5 96 0c f1 09 42 2b 72 3d 78 2b 2a ...J.....B+r=x+* 00:22:29.467 00000170 3b c4 62 09 32 ba 69 80 47 5f 74 9a cd 23 d1 50 ;.b.2.i.G_t..#.P 00:22:29.467 00000180 d5 5e 6d 1c ba 2e 76 31 98 6b 42 76 e8 95 a1 b2 .^m...v1.kBv.... 00:22:29.467 00000190 e3 62 18 6e 21 f7 de 84 17 84 47 15 34 6c b0 c4 .b.n!.....G.4l.. 00:22:29.467 000001a0 41 54 17 d9 d1 98 69 d8 89 5e 6b 27 d8 22 23 17 AT....i..^k'."#. 00:22:29.467 000001b0 e2 7d 7d ed 35 27 8e 76 e4 af 8b fe a7 5e 72 fb .}}.5'.v.....^r. 00:22:29.467 000001c0 29 5e 35 60 ef 46 c8 e2 c7 26 f7 81 c3 bd 30 6e )^5`.F...&....0n 00:22:29.467 000001d0 7c 01 9a d6 99 ce 7d f0 08 83 3c 75 f7 56 03 95 |.....}...bm.96...9.. 00:22:29.468 00000280 b7 7e 4f 27 c8 29 cf 4e 70 dc 81 97 3b a8 f9 81 .~O'.).Np...;... 00:22:29.468 00000290 35 29 5f 7d 57 41 6f 78 5f 3a 5e dd 80 be e7 b9 5)_}WAox_:^..... 00:22:29.468 000002a0 d6 75 1d d1 36 90 c8 2a e5 e9 19 2b d9 4d cd 3a .u..6..*...+.M.: 00:22:29.468 000002b0 a2 6b 25 02 43 9b 4c f1 85 07 d6 ff c0 55 6d 91 .k%.C.L......Um. 00:22:29.468 000002c0 ac 2a 37 18 6d 0d c6 3b ba 71 76 84 4f d4 05 73 .*7.m..;.qv.O..s 00:22:29.468 000002d0 91 d3 b7 ce 78 1e 6f cd 25 23 9c 30 6d db ce 0e ....x.o.%#.0m... 00:22:29.468 000002e0 58 9e 27 b2 a8 7c af de b6 2c 6e 05 6e d4 07 76 X.'..|...,n.n..v 00:22:29.468 000002f0 eb 6b 3b 8c 36 d8 03 c9 4f 64 69 d6 5e be 21 de .k;.6...Odi.^.!. 00:22:29.468 00000300 8f 6d c8 0b 8e 36 e4 55 28 26 93 61 a2 25 ef 6b .m...6.U(&.a.%.k 00:22:29.468 00000310 07 9b c2 cd 35 24 99 49 93 3b 63 7f ab 16 9e 32 ....5$.I.;c....2 00:22:29.468 00000320 72 c3 bf 98 06 53 fd 5f e2 d5 d1 ab 1a 81 36 f3 r....S._......6. 
00:22:29.468 00000330 fe 4e 7f 34 65 1b 15 85 c5 1e 09 e3 28 fb 4d dc .N.4e.......(.M. 00:22:29.468 00000340 cd cc cc 3e 1b 07 b9 42 1e bb 15 16 4e d8 20 e6 ...>...B....N. . 00:22:29.468 00000350 2a 2c ca 97 0b 9c 86 17 55 f7 17 46 c4 15 e7 26 *,......U..F...& 00:22:29.468 00000360 2e 30 97 0d ba f7 96 65 f8 c6 8b 7a 9b 84 17 0f .0.....e...z.... 00:22:29.468 00000370 1f 18 39 09 49 1c 72 72 40 18 c4 61 c2 b9 61 f4 ..9.I.rr@..a..a. 00:22:29.468 00000380 bf 11 c1 a5 b3 24 5a dd 5a c3 5e 75 0c 2a 1e d2 .....$Z.Z.^u.*.. 00:22:29.468 00000390 5d 08 d2 dd d1 8c 72 91 74 c1 e7 49 2d f3 f7 6a ].....r.t..I-..j 00:22:29.468 000003a0 40 67 90 3a 50 07 d3 f0 a5 d2 bf 04 07 46 68 0a @g.:P........Fh. 00:22:29.468 000003b0 11 cb cf be 62 06 ad db 11 c6 4c 45 c9 14 b8 97 ....b.....LE.... 00:22:29.468 000003c0 ff 0c f2 16 91 ac f7 eb b6 9c 66 40 ae e4 0e 86 ..........f@.... 00:22:29.468 000003d0 13 6a be 42 91 8f ba f9 87 1c 0b 2e 9c 93 72 44 .j.B..........rD 00:22:29.468 000003e0 16 4f dd dc 4b 0e aa 90 00 73 46 2e 1f ba ae c9 .O..K....sF..... 00:22:29.468 000003f0 b4 c6 16 c7 45 49 2f 27 f8 e0 1f f8 0f 25 82 7e ....EI/'.....%.~ 00:22:29.468 host pubkey: 00:22:29.468 00000000 a3 1b 42 e6 48 75 b2 9d 5f 06 c7 44 40 18 02 38 ..B.Hu.._..D@..8 00:22:29.468 00000010 b4 d8 de 1a d7 a7 5d fa 8a 6c 8e bd 52 ab b4 db ......]..l..R... 00:22:29.468 00000020 66 46 f6 07 74 62 27 90 76 65 a5 71 4e a1 0f 0a fF..tb'.ve.qN... 00:22:29.468 00000030 0b a0 8b d2 cf fc 30 45 ba 1f e1 cf b5 98 a6 21 ......0E.......! 00:22:29.468 00000040 5b 7e b9 4b 22 dd 51 1e 42 18 5d 5a eb 70 3c 35 [~.K".Q.B.]Z.p<5 00:22:29.468 00000050 17 aa 72 20 75 73 e5 b9 c0 c7 f4 31 4f 51 e4 40 ..r us.....1OQ.@ 00:22:29.468 00000060 f7 d3 6e c3 7c d3 fb 38 6f 00 a8 98 f8 1d f2 5d ..n.|..8o......] 00:22:29.468 00000070 a2 3d 6a f1 2c cd 53 c2 07 67 ef b0 5b ac d1 35 .=j.,.S..g..[..5 00:22:29.468 00000080 07 bd 58 34 c4 2e 2c c6 7f bc cd bc 1d 4d bf 71 ..X4..,......M.q 00:22:29.468 00000090 a2 99 eb f3 24 7c 9f b6 ab b1 83 ee 1a db 99 19 ....$|.......... 00:22:29.468 000000a0 3e c4 4e 34 7a df f8 99 2c a1 a0 e1 a5 f0 a9 ff >.N4z...,....... 00:22:29.468 000000b0 b6 43 97 fa 47 b8 c9 52 4b d5 04 e2 09 0f 42 53 .C..G..RK.....BS 00:22:29.468 000000c0 46 92 53 d3 95 78 5a 4a ce 55 6a 5d ed f7 04 ae F.S..xZJ.Uj].... 00:22:29.468 000000d0 0e 42 c9 4c 9e 3c 11 0a 8f bd ab b5 21 13 05 05 .B.L.<......!... 00:22:29.468 000000e0 97 5b 59 59 a4 98 0c 24 f9 17 1d 34 e0 45 c8 99 .[YY...$...4.E.. 00:22:29.468 000000f0 87 6e ac b8 f5 53 d9 ef 47 1d 23 bb a5 46 cd 64 .n...S..G.#..F.d 00:22:29.468 00000100 a8 b5 7a fe 41 ba 17 d2 9d f2 2e 2d e9 2c d5 22 ..z.A......-.,." 00:22:29.468 00000110 f8 45 4a d5 c5 9e a0 8b bb 76 63 75 1d 91 2f 10 .EJ......vcu../. 00:22:29.468 00000120 07 80 df a1 80 e8 e8 1c 96 fc ef 74 0c a6 aa 62 ...........t...b 00:22:29.468 00000130 5b 87 c6 00 69 25 fe cd f1 e3 61 96 bf ea 73 7a [...i%....a...sz 00:22:29.468 00000140 54 8b 21 ec f6 24 6c 68 77 8e 72 b9 89 eb c4 1d T.!..$lhw.r..... 00:22:29.468 00000150 c6 70 67 bf 7d 0c 20 c9 9a 3a 7b a6 39 de 14 47 .pg.}. ..:{.9..G 00:22:29.468 00000160 b0 e6 a0 43 56 99 73 3a 5b 35 de 7e 52 eb 40 78 ...CV.s:[5.~R.@x 00:22:29.468 00000170 3f df fa 9d 9a 44 16 0a f1 76 e1 5c e9 91 0d 20 ?....D...v.\... 00:22:29.468 00000180 39 28 62 2a f6 7d 25 8d e7 5e c2 4e 8c 78 b3 50 9(b*.}%..^.N.x.P 00:22:29.468 00000190 90 11 9a 44 67 75 b0 75 5f 09 2b f5 e8 69 70 a3 ...Dgu.u_.+..ip. 00:22:29.468 000001a0 af b5 64 cc e2 bd 6d 06 22 31 c9 b2 e1 8d 73 87 ..d...m."1....s. 
00:22:29.468 000001b0 d9 17 c8 a2 32 80 2b 56 95 e0 23 ba cd 78 20 32 ....2.+V..#..x 2 00:22:29.468 000001c0 24 ec 82 61 aa 5b 16 73 83 dd f1 76 45 60 27 06 $..a.[.s...vE`'. 00:22:29.468 000001d0 2d 9d 80 4e 1f 06 75 5f 8f 54 21 17 3b 69 ff 7f -..N..u_.T!.;i.. 00:22:29.468 000001e0 06 76 49 76 7b 5f 15 a9 72 08 67 0d f4 9e 10 87 .vIv{_..r.g..... 00:22:29.468 000001f0 50 ea 8b c7 cb 0e 9e cb 60 32 60 96 07 eb 24 ab P.......`2`...$. 00:22:29.468 00000200 7b 9b 57 09 19 19 c9 0f 86 30 5e 4a d7 7e 66 a8 {.W......0^J.~f. 00:22:29.468 00000210 67 80 5a 82 04 24 65 28 52 cf 9c 11 69 16 4b 42 g.Z..$e(R...i.KB 00:22:29.468 00000220 5a 9d 72 76 0b ce d0 86 9e be 61 10 1d de b1 2f Z.rv......a..../ 00:22:29.468 00000230 da f3 ac cf 71 ec e6 e8 3d b3 2f c1 8f 6b f0 0f ....q...=./..k.. 00:22:29.468 00000240 33 24 fa e0 7c bb 1e 66 32 75 8a cb c6 2e c6 0b 3$..|..f2u...... 00:22:29.468 00000250 d9 d7 7e 96 c5 8c 9b ba 9f 8f e9 c1 b2 09 51 db ..~...........Q. 00:22:29.468 00000260 f9 bf a7 e7 b0 5e ae 28 eb fc 63 39 11 cf b4 46 .....^.(..c9...F 00:22:29.468 00000270 a7 9f c6 17 09 22 98 d1 57 a8 66 68 48 35 25 30 ....."..W.fhH5%0 00:22:29.468 00000280 d9 4a 4c 98 84 1f 57 bc 32 f5 cb 16 22 de b3 e7 .JL...W.2..."... 00:22:29.468 00000290 81 bd d1 0d f5 33 77 5a e6 89 e8 c1 99 fb 0c 47 .....3wZ.......G 00:22:29.468 000002a0 67 03 8d 70 b0 43 ba 0d f8 e1 46 83 c0 d0 97 de g..p.C....F..... 00:22:29.468 000002b0 f3 ab 3c 32 d0 d9 2d d5 32 9f 30 b8 88 1d 24 ec ..<2..-.2.0...$. 00:22:29.468 000002c0 6f 36 fa 16 0d 4c 28 9d f9 7d 15 5d a0 62 6a ad o6...L(..}.].bj. 00:22:29.468 000002d0 9f f1 04 f8 a2 df a6 02 4b bb bd e8 b7 03 8a 0d ........K....... 00:22:29.468 000002e0 bb 55 9f e7 13 f7 67 9b e3 fd 45 30 81 49 e4 a4 .U....g...E0.I.. 00:22:29.468 000002f0 58 8e 3f 29 49 3b eb 65 f6 f4 b5 1f 16 f6 e3 6c X.?)I;.e.......l 00:22:29.468 00000300 10 e9 a0 a8 c3 36 7a be e6 65 36 b1 88 84 d2 aa .....6z..e6..... 00:22:29.468 00000310 c7 d1 5c ff cc e5 49 cc b3 f8 27 57 69 12 8a 13 ..\...I...'Wi... 00:22:29.468 00000320 2b b7 8a ef 39 bc 59 4e 5e 18 3b f3 47 e0 66 07 +...9.YN^.;.G.f. 00:22:29.468 00000330 35 d3 ef 99 43 d8 ec cc c2 e8 97 29 57 4b c9 04 5...C......)WK.. 00:22:29.468 00000340 4b e8 e6 ad 95 2a 71 91 b0 d4 f6 8a 84 a2 c0 51 K....*q........Q 00:22:29.468 00000350 90 c0 75 9c 6a 17 29 6c a6 bd 99 4f cc 69 1f 2e ..u.j.)l...O.i.. 00:22:29.468 00000360 8e 9f 6f 43 4d 41 57 0c 18 db 05 8b 52 8d 5f 51 ..oCMAW.....R._Q 00:22:29.468 00000370 76 47 3b b0 cd f9 35 dd a5 cd 47 c0 63 75 72 70 vG;...5...G.curp 00:22:29.468 00000380 1e 72 7d ff 9a cf d6 ad f1 01 bc c2 dd 3a 9b e9 .r}..........:.. 00:22:29.468 00000390 d4 44 f6 2f 37 1f 47 97 3a b7 ff e4 aa eb d1 a5 .D./7.G.:....... 00:22:29.468 000003a0 00 ea 53 37 47 6c 55 d6 ca 55 e8 f4 97 3c a6 e9 ..S7GlU..U...<.. 00:22:29.468 000003b0 7f 58 ee a4 aa a5 98 5b 3b 64 36 01 8b a5 ae 67 .X.....[;d6....g 00:22:29.468 000003c0 52 bf be 9f 71 ef 71 20 b1 e4 85 3c ca 43 b9 09 R...q.q ...<.C.. 00:22:29.468 000003d0 ad 9f 2e 7f 95 9c 8c aa 46 1a 41 1e e7 7e 61 aa ........F.A..~a. 00:22:29.468 000003e0 51 8b 87 37 18 d2 20 b1 92 1c 53 c1 f4 63 02 9e Q..7.. ...S..c.. 00:22:29.468 000003f0 fa a4 cf c5 0b d1 0e 89 e4 a6 e0 6e 56 a8 70 c8 ...........nV.p. 
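This transaction negotiated digest 1 (sha256) with dhgroup 5 (ffdhe8192), which is why the public-value dumps above run out to offset 0x3f0 (0x400 = 1024 bytes, i.e. 8192 bits) rather than the 0x300-byte values seen with ffdhe6144 earlier. The sketch below only demonstrates the HMAC-SHA-256 primitive behind the hash=1 reply records; every input in it is made up, and the way the values are concatenated is not the normative DH-HMAC-CHAP message construction, which is defined by the NVMe authentication spec.

```python
# Hedged sketch of the HMAC-SHA-256 primitive implied by "digest: 1 (sha256)"
# and hash=1 in the nvme_auth_send_reply records. The key, challenge and the
# concatenation below are placeholders for illustration only.
import hashlib
import hmac

chap_key  = bytes.fromhex("00112233445566778899aabbccddeeff")  # hypothetical key
challenge = bytes(32)        # hypothetical challenge value
dh_secret = bytes(1024)      # sized like the ffdhe8192 public values above (0x400 bytes)

# A keyed SHA-256 over challenge material augmented with the DH secret yields
# a 32-byte value, the digest size of SHA-256.
response = hmac.new(chap_key, challenge + dh_secret, hashlib.sha256).digest()
assert len(response) == 32
```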
00:22:29.468 dh secret:
00:22:29.468 [1024-byte hex dump elided; the log is truncated partway through and the label of the following dump is lost]
00:22:29.469 host pubkey:
00:22:29.469 [1024-byte hex dump elided]
00:22:29.470 [remainder of host pubkey dump elided]
00:22:29.470 dh secret:
00:22:29.470 [1024-byte hex dump elided; the log is truncated twice here, dropping part of this dump and the label of the next one]
00:22:29.471 [remainder of the unlabeled 1024-byte dump elided]
00:22:29.471 host pubkey:
00:22:29.471 [1024-byte hex dump elided]
00:22:29.472 [remainder of host pubkey dump elided; two further truncations drop the labels and leading rows of the next two 1024-byte dumps]
00:22:29.473 [remainder of the unlabeled 1024-byte dump elided]
00:22:29.473 host pubkey:
00:22:29.473 [1024-byte hex dump elided]
00:22:29.473 [remainder of host pubkey dump elided]
00:22:29.473 dh secret:
00:22:29.473 [1024-byte hex dump elided]
00:22:29.474 [remainder of dh secret dump elided]
00:22:29.474 [2024-09-27 13:27:09.864448] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key1, hash=1, dhgroup=5, seq=3775755216, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32
00:22:29.474 [2024-09-27 13:27:09.864788] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply
00:22:29.474 [2024-09-27 13:27:09.949141] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1
00:22:29.474 [2024-09-27 13:27:09.949735] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully
00:22:29.474 [2024-09-27 13:27:09.949985] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2
00:22:29.474 [2024-09-27 13:27:09.950293] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done
00:22:29.474 [2024-09-27 13:27:10.100782] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate
00:22:29.474 [2024-09-27 13:27:10.101050] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256)
00:22:29.474 [2024-09-27 13:27:10.101185] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 5 (ffdhe8192)
00:22:29.474 [2024-09-27 13:27:10.101423] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate
00:22:29.474 [2024-09-27 13:27:10.101649] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge
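The debug lines above trace SPDK's DH-HMAC-CHAP handshake for each queue pair: negotiate (digest 1 = sha256, dhgroup 5 = ffdhe8192), then challenge, reply, and the two success messages, with the exchanged 1024-byte public values and the derived shared secret dumped along the way. The sketch below is only an illustration of that Diffie-Hellman step, not SPDK's implementation: it uses a tiny stand-in prime instead of the 8192-bit ffdhe8192 modulus, and the final HMAC is a loose stand-in for the reply computation (the key value and the challenge here are hypothetical).

# Toy sketch of the Diffie-Hellman exchange behind the "host pubkey", "ctrlr pubkey"
# and "dh secret" dumps in this log. Assumptions: a small Mersenne prime stands in
# for the real 8192-bit RFC 7919 ffdhe8192 modulus, and the HMAC at the end is only
# loosely modeled on the DH-HMAC-CHAP reply -- it is not SPDK's exact derivation.
import hashlib
import hmac
import secrets

p = 2**127 - 1   # tiny demo modulus (NOT ffdhe8192, which is 8192 bits / 1024 bytes)
g = 5            # demo generator

def dh_keypair():
    """Pick a private exponent x and return (x, g^x mod p)."""
    priv = secrets.randbelow(p - 2) + 1
    return priv, pow(g, priv, p)

# Host and controller each generate a keypair and exchange the public halves
# (the "host pubkey" / "ctrlr pubkey" buffers in the log).
host_priv, host_pub = dh_keypair()
ctrlr_priv, ctrlr_pub = dh_keypair()

# Both sides raise the peer's public value to their own private exponent and
# land on the same shared value (the "dh secret" buffers in the log).
host_secret = pow(ctrlr_pub, host_priv, p)
ctrlr_secret = pow(host_pub, ctrlr_priv, p)
assert host_secret == ctrlr_secret

# The shared secret is then folded into the challenge/reply computation together
# with the configured DH-CHAP key; hash=1 in the log corresponds to SHA-256.
dhchap_key = b"hypothetical-dhchap-secret"   # placeholder, not a real key
challenge = secrets.token_bytes(32)          # stands in for the controller challenge
dh_bytes = host_secret.to_bytes((p.bit_length() + 7) // 8, "big")
reply = hmac.new(dhchap_key + dh_bytes, challenge, hashlib.sha256).hexdigest()
print("reply:", reply)

In the real exchange the public values and the shared secret are the 1024-byte buffers dumped in this log, which is why each dump runs from offset 00000000 to 000003f0.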
00:22:29.474 ctrlr pubkey:
00:22:29.474 [1024-byte hex dump elided; the log is truncated partway through and the label of the following 1024-byte dump is lost]
00:22:29.475 dh secret:
00:22:29.475 [1024-byte hex dump elided]
00:22:29.475 [remainder of dh secret dump elided]
00:22:29.476 [two further 1024-byte hex dumps elided; their labels were lost to log truncation]
00:22:29.476 000003f0 21 4e 7e af 7c 42 d3 7d 75 ed 9d 4f 64 d1 45 25 !N~.|B.}u..Od.E% 00:22:29.476 dh secret: 00:22:29.476 00000000 e6 c9 9c 3e 79 a7 50 fe 1d 53 40 d3 01 ea 74 e1 ...>y.P..S@...t. 00:22:29.476 00000010 28 33 d7 db 66 b1 68 b8 a5 78 1c ba f5 6c 52 49 (3..f.h..x...lRI 00:22:29.476 00000020 ac cc 0e 0e 2c 8c 1f d8 65 06 e1 48 15 73 40 5c ....,...e..H.s@\ 00:22:29.476 00000030 16 94 36 28 bc e0 ea 12 bf c0 37 9d 37 e6 2d 5e ..6(......7.7.-^ 00:22:29.476 00000040 02 e5 91 70 74 7b 86 5d 5e 1d 30 8e 68 cc c4 d9 ...pt{.]^.0.h... 00:22:29.476 00000050 37 03 24 cd ad 76 8a c8 2b e9 b5 ed 95 04 f5 c3 7.$..v..+....... 00:22:29.476 00000060 54 c9 67 71 5b 1a 27 86 1b fd 9a f6 ab 15 41 4e T.gq[.'.......AN 00:22:29.476 00000070 12 4b d1 15 cf 70 4c 4e 76 0f 61 c1 64 d4 8c ea .K...pLNv.a.d... 00:22:29.476 00000080 09 ed e1 71 65 bf e3 0e aa 6a 60 ec 1c 8b 09 ae ...qe....j`..... 00:22:29.476 00000090 6a 0a a6 1b a5 1f 1d 0f 3f 17 9b 4b eb 15 5e 3b j.......?..K..^; 00:22:29.476 000000a0 84 2d cc 63 7e 41 99 76 f9 ee 2a 16 a3 6f 4d 79 .-.c~A.v..*..oMy 00:22:29.476 000000b0 7a 3e e7 eb da 1c c9 36 05 00 2f 07 23 ed 9b 08 z>.....6../.#... 00:22:29.476 000000c0 32 77 af 54 3a d5 d5 cc 84 08 c1 ce c2 e6 e3 ec 2w.T:........... 00:22:29.476 000000d0 9b 26 c9 30 20 c0 a4 92 41 6c b7 9d 1b c3 38 58 .&.0 ...Al....8X 00:22:29.476 000000e0 6b 76 fc 2d a8 b3 81 ae 0b ba f6 4a d0 e9 01 25 kv.-.......J...% 00:22:29.476 000000f0 15 09 4d 73 9c f9 cc ed dd 4d 89 66 3f cf a8 ba ..Ms.....M.f?... 00:22:29.476 00000100 61 26 22 ba 83 42 21 5e af 26 98 ce 63 bd af 4e a&"..B!^.&..c..N 00:22:29.476 00000110 be dc 87 48 99 e8 76 7c e1 3f 91 11 49 14 37 61 ...H..v|.?..I.7a 00:22:29.476 00000120 5c a6 90 fe 72 52 7e 9b 1d 0b a9 68 1f b1 7e be \...rR~....h..~. 00:22:29.476 00000130 26 a6 02 31 8e c9 79 e4 d3 67 e2 f2 cd df 8b 97 &..1..y..g...... 00:22:29.476 00000140 32 99 c6 25 55 48 97 ad cd c8 26 90 b9 d5 7f 45 2..%UH....&....E 00:22:29.476 00000150 28 e9 25 d5 2e 9b 53 4c 96 cb 80 2f 67 00 ea c5 (.%...SL.../g... 00:22:29.476 00000160 e7 dd 63 90 61 d2 54 c9 6e 1e e6 e2 06 ff 11 27 ..c.a.T.n......' 00:22:29.476 00000170 26 b7 fe e8 8b 3b 97 ac 3a b0 15 9e 94 ee c1 22 &....;..:......" 00:22:29.476 00000180 12 a0 5c 53 06 53 90 b6 43 a9 cf 50 f3 44 b3 d7 ..\S.S..C..P.D.. 00:22:29.476 00000190 19 10 5c d1 82 b5 fc 3f 44 11 e0 31 2b e0 99 8a ..\....?D..1+... 00:22:29.476 000001a0 30 14 a1 04 11 68 44 7c a1 0b d9 a3 a9 dd 5c 63 0....hD|......\c 00:22:29.476 000001b0 ce c2 84 cc a1 06 46 cf e2 b8 4b 80 81 bf 06 42 ......F...K....B 00:22:29.476 000001c0 e1 7d 78 63 65 13 6f 60 c4 3f f8 b7 d6 1a f8 b4 .}xce.o`.?...... 00:22:29.476 000001d0 2d 31 90 dd 61 88 71 83 ee a2 fd 73 c5 74 8b 22 -1..a.q....s.t." 00:22:29.476 000001e0 f7 e7 ed 6b 2b 64 34 8a b3 8f 53 c5 6b 81 24 0a ...k+d4...S.k.$. 00:22:29.476 000001f0 23 86 82 09 6d 72 6f 43 80 73 41 eb 0f c9 ad 36 #...mroC.sA....6 00:22:29.476 00000200 13 dd 8e 34 13 0c 8b f9 2c 83 81 8d 12 fa dd e6 ...4....,....... 00:22:29.476 00000210 ca 8a 73 5c d2 9e 50 07 06 d3 ea 55 e9 25 3f c8 ..s\..P....U.%?. 00:22:29.476 00000220 45 c9 f5 43 f1 f9 cd e3 d4 47 2c 21 3c 1a 7d 89 E..C.....G,!<.}. 00:22:29.476 00000230 b5 c7 9b cb 03 d7 69 6f 70 6e 52 e1 08 32 b9 a3 ......iopnR..2.. 
00:22:29.476 00000240 fb 5e 2a 41 7e ae d6 ce 0b 4e 63 3d a8 13 f3 40 .^*A~....Nc=...@ 00:22:29.476 00000250 fb 01 81 ce 46 c6 13 6b 30 31 ff e7 87 e9 79 64 ....F..k01....yd 00:22:29.476 00000260 04 d6 b6 1c 2f f6 66 1f d7 8c 3c 1c e5 57 88 77 ..../.f...<..W.w 00:22:29.476 00000270 08 0c 8a 8a ed e3 99 b2 8a c6 10 ed 60 7d 78 7e ............`}x~ 00:22:29.476 00000280 dc 82 76 95 72 5b f1 14 ba 3b ef 09 cf 60 fa a6 ..v.r[...;...`.. 00:22:29.476 00000290 b7 d1 ae a3 c7 97 15 7d 3b 97 02 df d8 be 08 c8 .......};....... 00:22:29.476 000002a0 91 f5 58 af fc 7e 51 7a bd 8a 92 5c c7 cc 8a bc ..X..~Qz...\.... 00:22:29.476 000002b0 e3 c6 a6 de 3f b7 36 55 31 df 27 9f 5f fc ca 45 ....?.6U1.'._..E 00:22:29.476 000002c0 73 a6 76 e3 fc db 87 f0 ef 9e 54 ff 77 1b 63 8e s.v.......T.w.c. 00:22:29.476 000002d0 ff ce 2b 9c 5a f5 d9 dc 66 55 c8 fd 2e 92 4b a6 ..+.Z...fU....K. 00:22:29.476 000002e0 58 d9 f4 71 74 63 fc 1b c4 76 00 5d 01 ee c8 c1 X..qtc...v.].... 00:22:29.476 000002f0 d8 9a 36 2c c8 88 51 a6 17 bd 4c 0a 46 ae fe 03 ..6,..Q...L.F... 00:22:29.476 00000300 81 65 13 bb 79 f7 30 29 3d 17 4e 4c 84 7c cd fc .e..y.0)=.NL.|.. 00:22:29.476 00000310 16 bc 9d 3b 19 14 da 06 32 10 df 76 2f e0 15 76 ...;....2..v/..v 00:22:29.476 00000320 8c b5 8a 6f 63 70 12 df 84 30 c6 72 97 64 74 98 ...ocp...0.r.dt. 00:22:29.476 00000330 4c d5 ef c9 b9 b1 5d 60 1b a7 90 1e 28 8c f4 59 L.....]`....(..Y 00:22:29.476 00000340 af f0 95 3e c1 7a 99 84 6b d9 9f 05 6e 46 78 26 ...>.z..k...nFx& 00:22:29.477 00000350 e5 ba 6e f2 7d be 62 2e e2 47 ee ad 07 56 b6 93 ..n.}.b..G...V.. 00:22:29.477 00000360 46 f9 51 42 4d 0c b5 5a 51 44 d8 cf 71 97 bf 40 F.QBM..ZQD..q..@ 00:22:29.477 00000370 c7 18 71 66 0d c2 21 18 7c c0 54 2d d2 4d 39 93 ..qf..!.|.T-.M9. 00:22:29.477 00000380 b4 c2 ff b2 30 74 58 75 56 d1 50 84 1b ab 05 90 ....0tXuV.P..... 00:22:29.477 00000390 db b0 8f c8 e4 c2 e8 e5 4a 66 69 19 36 84 dd b9 ........Jfi.6... 00:22:29.477 000003a0 07 f9 97 86 0a e1 fe d5 6d eb 48 37 b6 d2 9e 29 ........m.H7...) 00:22:29.477 000003b0 01 60 e1 5d f0 87 5c 07 c7 95 04 dd 12 d8 f0 bd .`.]..\......... 00:22:29.477 000003c0 f3 af 86 ec b1 0f c1 b8 ea c4 29 b6 c0 49 d0 86 ..........)..I.. 00:22:29.477 000003d0 54 04 98 35 60 88 53 ff 1a 8d 4e 82 45 69 15 7e T..5`.S...N.Ei.~ 00:22:29.477 000003e0 9c 4a 7d 6c 9c f4 53 1a 9c 27 f7 09 0b be 03 9c .J}l..S..'...... 
00:22:29.477 000003f0 8e 87 f1 01 91 07 97 c4 b4 97 68 19 a6 f6 0c 68 ..........h....h 00:22:29.477 [2024-09-27 13:27:10.599846] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key2, hash=1, dhgroup=5, seq=3775755218, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.477 [2024-09-27 13:27:10.600392] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.477 [2024-09-27 13:27:10.694251] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.477 [2024-09-27 13:27:10.694800] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.477 [2024-09-27 13:27:10.695012] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.477 [2024-09-27 13:27:10.695489] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.477 [2024-09-27 13:27:10.865794] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.477 [2024-09-27 13:27:10.866073] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.477 [2024-09-27 13:27:10.866352] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 5 (ffdhe8192) 00:22:29.477 [2024-09-27 13:27:10.866600] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.477 [2024-09-27 13:27:10.866909] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.477 ctrlr pubkey: 00:22:29.477 00000000 89 c5 6c e4 9a f0 c8 4f d1 86 f3 71 33 24 d9 ef ..l....O...q3$.. 00:22:29.477 00000010 51 6b 0c 48 c5 e8 9e 92 20 d6 bd 8f c5 14 dd 9e Qk.H.... ....... 00:22:29.477 00000020 2d c6 9f ac 1b f2 db 25 ae 4f b7 2f 9a 73 f5 56 -......%.O./.s.V 00:22:29.477 00000030 d9 c6 70 b5 19 e1 03 4c c1 9d d6 34 7f 1b f8 33 ..p....L...4...3 00:22:29.477 00000040 7c 15 90 36 1b 99 5b 9e c3 89 20 00 e1 ea 0f 5d |..6..[... ....] 00:22:29.477 00000050 21 73 f0 da 94 d4 8d e9 a0 f3 bc 61 df a6 7d ad !s.........a..}. 00:22:29.477 00000060 4f 2f dd 44 0b ef 36 da 02 37 68 c1 44 a2 c7 a2 O/.D..6..7h.D... 00:22:29.477 00000070 6c f0 13 b2 30 ea fa f9 be 1a fa 72 35 f0 20 ec l...0......r5. . 00:22:29.477 00000080 30 5f a0 97 36 4c 3d 99 52 4a 6d c9 21 10 b0 6a 0_..6L=.RJm.!..j 00:22:29.477 00000090 87 91 7f 54 b7 dc ee ee 06 8d e0 23 82 d3 4c f4 ...T.......#..L. 00:22:29.477 000000a0 63 68 05 cd 98 00 cc 1e 58 87 76 00 c8 a5 a4 99 ch......X.v..... 00:22:29.477 000000b0 fb e4 64 65 8f cb 24 2f 66 b4 54 6d c0 a9 c2 42 ..de..$/f.Tm...B 00:22:29.477 000000c0 93 19 07 24 2e 7a 98 62 5e bb b4 e1 2b d4 cc f0 ...$.z.b^...+... 00:22:29.477 000000d0 35 d3 5a 61 f8 ce 29 d1 0d 09 dc d2 99 dc 06 88 5.Za..)......... 00:22:29.477 000000e0 4f 5c 93 a4 8a 56 23 af c2 05 43 5b 8c aa e6 08 O\...V#...C[.... 00:22:29.477 000000f0 9d 81 b7 83 83 83 6c aa 1a c3 c3 d2 e7 1c 51 e4 ......l.......Q. 
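The DEBUG lines above trace the host-side DH-HMAC-CHAP state machine for one qpair: negotiate, await-negotiate, await-challenge, then (after the reply carrying the response and the host DH value) await-reply, await-success1, await-success2 and done. Below is a minimal Python sketch of that success-path progression, using only the state names visible in this log; the real state machine lives in SPDK's lib/nvme/nvme_auth.c and also handles failures, retries and message parsing that are omitted here:

# Minimal sketch of the host-side DH-HMAC-CHAP state progression seen in this
# log. Illustrative only; the authoritative implementation is nvme_auth.c.
SUCCESS_ORDER = [
    "negotiate",        # send AUTH_Negotiate (digest/dhgroup proposal)
    "await-negotiate",  # wait for the negotiate step to complete
    "await-challenge",  # wait for the DH-HMAC-CHAP_Challenge message
    "await-reply",      # reply (response + host DH value) sent, wait for completion
    "await-success1",   # wait for DH-HMAC-CHAP_Success1 from the controller
    "await-success2",   # bidirectional auth: complete the Success2 step
    "done",
]

def next_state(state: str) -> str:
    """Return the next state on the success path, staying at 'done' once reached."""
    idx = SUCCESS_ORDER.index(state)
    return SUCCESS_ORDER[min(idx + 1, len(SUCCESS_ORDER) - 1)]

if __name__ == "__main__":
    s = "negotiate"
    while s != "done":
        print("auth state:", s)
        s = next_state(s)
    print("auth state: done")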
00:22:29.477 00000100 35 35 be 92 b3 48 f4 b3 56 bd 4f 7a ed 17 8e ef 55...H..V.Oz.... 00:22:29.477 00000110 ca 67 fb ed 3d 53 cc 8b f7 11 de 3a f9 94 f9 e2 .g..=S.....:.... 00:22:29.477 00000120 9c a4 44 58 75 47 71 ce 15 ec 31 4e 08 c1 f3 bc ..DXuGq...1N.... 00:22:29.477 00000130 01 32 e3 85 f2 45 03 9f eb e5 73 91 e6 e3 4d 82 .2...E....s...M. 00:22:29.477 00000140 41 82 10 40 ed 31 3f 97 be 87 57 bb 2c e1 22 12 A..@.1?...W.,.". 00:22:29.477 00000150 f7 1e d8 1b ce 19 c5 77 7b 6e 59 86 89 3c ef e2 .......w{nY..<.. 00:22:29.477 00000160 26 d5 82 2e a5 b0 b3 ea 73 c7 a9 ca ad 90 7b 78 &.......s.....{x 00:22:29.477 00000170 8e 40 0d f9 fa 06 b9 a4 02 91 20 75 4d 71 34 96 .@........ uMq4. 00:22:29.477 00000180 66 09 1e 02 26 f2 1f 7b 42 68 f9 75 eb 17 b8 ba f...&..{Bh.u.... 00:22:29.477 00000190 59 bb fa e2 f4 18 73 67 0f 64 83 41 bd a5 b6 aa Y.....sg.d.A.... 00:22:29.477 000001a0 89 ef 10 cf 81 84 71 6e dd ac b5 9b 60 76 fb f8 ......qn....`v.. 00:22:29.477 000001b0 13 16 da f2 e5 03 de e8 bf ff 71 34 7a a0 23 26 ..........q4z.#& 00:22:29.477 000001c0 7b d9 5b 3a 27 25 8d 51 b6 63 6e 38 17 99 ec 85 {.[:'%.Q.cn8.... 00:22:29.477 000001d0 71 8d 51 ca 50 d3 d0 1f dc 5a 1c 23 b6 ed e4 ef q.Q.P....Z.#.... 00:22:29.477 000001e0 58 35 8e 84 e2 fd 81 e6 d8 29 f9 d2 94 eb 24 20 X5.......)....$ 00:22:29.477 000001f0 10 c3 2e 01 75 39 9e 2f 24 0b 0a b7 a9 3a ab 02 ....u9./$....:.. 00:22:29.477 00000200 f3 e0 81 b4 2e 96 12 2e f9 a7 6b 4a 96 cc b0 fb ..........kJ.... 00:22:29.477 00000210 79 5a dc 19 80 d7 e5 9a 0f 60 82 03 ad af 17 8b yZ.......`...... 00:22:29.477 00000220 db 3b 58 65 78 4c d9 30 7c a1 7a 3c 6b d0 d8 b7 .;XexL.0|.z 00:22:29.477 00000240 94 16 b5 f7 82 6a cb d8 92 c3 fd da 16 f7 f4 22 .....j........." 00:22:29.477 00000250 17 8c 1c 4b 02 6f 2d 7e 8a 95 4a 4d 00 c8 43 5e ...K.o-~..JM..C^ 00:22:29.477 00000260 55 98 3c 42 32 9e 7f 7a 10 c0 1c 5d a9 be 44 a0 U.}F;..@.... 00:22:29.477 000000d0 3e 26 bd 46 c9 cb ad 7d ce 38 0a 34 6e 14 c5 7c >&.F...}.8.4n..| 00:22:29.477 000000e0 ac 1e 71 55 4e 06 7e fe 25 65 c8 c0 d1 c6 e8 1e ..qUN.~.%e...... 00:22:29.477 000000f0 f7 f0 d6 32 c8 b6 c9 29 d9 6e 5b ec a6 de 95 58 ...2...).n[....X 00:22:29.477 00000100 1f a7 fc fd 02 a8 13 20 83 d7 11 93 94 49 1e 81 ....... .....I.. 00:22:29.477 00000110 c6 28 37 2b 19 32 d1 1b 50 87 d8 36 b0 63 0a be .(7+.2..P..6.c.. 00:22:29.477 00000120 37 b5 c6 f1 c8 9d 32 b8 fd 59 b2 be 01 96 f4 2d 7.....2..Y.....- 00:22:29.477 00000130 1a 50 de 85 0c 43 17 12 b1 7b 8d 82 a5 23 f8 dc .P...C...{...#.. 00:22:29.477 00000140 80 da 5b c2 7b 02 d5 eb 26 d3 ad f8 ad 85 b2 d5 ..[.{...&....... 00:22:29.477 00000150 d2 da d2 b0 f7 c6 22 d0 f5 59 2b 01 ea 61 88 b5 ......"..Y+..a.. 00:22:29.477 00000160 c2 e3 40 e8 50 55 2c 2e 99 e4 9c d9 fa 3f 56 9a ..@.PU,......?V. 00:22:29.477 00000170 1c de 9d 7c 96 3d ca 39 ad aa 79 2f 2d 08 5e 6d ...|.=.9..y/-.^m 00:22:29.477 00000180 7d bd 75 7a 31 de ef eb 29 5e 7f 60 75 d1 52 67 }.uz1...)^.`u.Rg 00:22:29.477 00000190 a4 05 d6 49 e8 24 2a 66 0c ff 75 4f 27 90 f5 7d ...I.$*f..uO'..} 00:22:29.477 000001a0 ff ac 3c 23 07 fb ae a7 f6 a2 cc e9 b1 fc e0 a3 ..<#............ 00:22:29.477 000001b0 09 38 ec d4 2b 30 3b 58 7b a8 de 12 5b 70 32 30 .8..+0;X{...[p20 00:22:29.477 000001c0 c8 5e f2 fb 05 d1 20 a9 03 64 33 d2 46 ce bd 61 .^.... ..d3.F..a 00:22:29.477 000001d0 58 90 2b 0f 51 08 1d ac 20 45 08 2e d2 fc 92 9e X.+.Q... E...... 
00:22:29.477 000001e0 4a d5 7f 6d f7 c1 a3 d0 5d cc 52 ab 10 01 92 57 J..m....].R....W 00:22:29.477 000001f0 c2 5c 9b 46 86 bf 1b 03 ee fe 1a 5d d9 4e 59 fe .\.F.......].NY. 00:22:29.477 00000200 ec 2a 25 34 14 df 15 19 9b 1c 55 3c 97 fe 79 08 .*%4......U<..y. 00:22:29.477 00000210 27 35 2d ed 08 98 56 7f e4 91 36 2b 2e ed 56 0a '5-...V...6+..V. 00:22:29.477 00000220 f6 2d 92 5e ce bc da 59 c4 b5 14 93 54 dc b7 94 .-.^...Y....T... 00:22:29.477 00000230 08 0e 45 52 84 29 0e d0 8f 5f 7e 9e 89 f8 56 ad ..ER.)..._~...V. 00:22:29.477 00000240 e6 ca af 55 32 ae 5e 39 cf db 1b f4 b5 fc ec 7f ...U2.^9........ 00:22:29.477 00000250 9e a3 b1 17 ca 84 7f ca 4e b1 88 d7 28 4c f7 0a ........N...(L.. 00:22:29.477 00000260 79 73 72 19 4f ed 41 2e dd 14 57 96 fb 00 8e 1e ysr.O.A...W..... 00:22:29.477 00000270 ab 0a 74 fd a7 2a 5b 7b 67 34 84 6d 50 17 4e c0 ..t..*[{g4.mP.N. 00:22:29.477 00000280 74 22 11 a3 82 45 e6 24 34 07 b0 a3 73 b9 7f d0 t"...E.$4...s... 00:22:29.477 00000290 36 54 65 ef a3 8f aa 17 19 ca 93 06 48 9f 2d b2 6Te.........H.-. 00:22:29.477 000002a0 ca b9 7c 8a a1 18 e2 ed a6 a4 13 53 34 b1 fc 78 ..|........S4..x 00:22:29.477 000002b0 2c 3c 77 6e e7 16 1d 01 85 c5 e0 dc eb fd 92 60 ,anNp.<....o. 00:22:29.478 000003e0 71 c8 72 ab c2 61 8f 70 62 93 1e 2b c1 83 02 3b q.r..a.pb..+...; 00:22:29.478 000003f0 af 02 73 ac 40 d6 a0 db e3 14 78 70 83 ba 62 2e ..s.@.....xp..b. 00:22:29.478 dh secret: 00:22:29.478 00000000 61 96 bb b2 bc ae 55 6f 02 77 11 89 8e 4c a1 ad a.....Uo.w...L.. 00:22:29.478 00000010 34 ba d9 8f 30 57 2c b0 90 d4 e2 b8 0f 8a ce 9a 4...0W,......... 00:22:29.478 00000020 88 a6 35 9d 56 3f 54 1d 0e a3 31 da ed 7e eb e0 ..5.V?T...1..~.. 00:22:29.478 00000030 0b 25 15 ff 35 25 5d 44 30 b0 e5 ad 62 f1 de dd .%..5%]D0...b... 00:22:29.478 00000040 c9 c2 a2 6d e3 41 54 03 eb 8e 3b d6 29 6b ac 3e ...m.AT...;.)k.> 00:22:29.478 00000050 c7 82 0e 25 d9 27 a4 6e 58 a0 b3 cc e2 56 9c 53 ...%.'.nX....V.S 00:22:29.478 00000060 3f 37 c0 fd 12 26 73 f3 09 f4 9c c4 a7 53 b8 42 ?7...&s......S.B 00:22:29.478 00000070 d6 56 39 c0 47 45 5d 31 d5 df 53 26 0b d0 cf 7e .V9.GE]1..S&...~ 00:22:29.478 00000080 45 11 28 23 35 48 d4 64 7d 26 49 90 00 87 f7 2b E.(#5H.d}&I....+ 00:22:29.478 00000090 e2 2b e4 8b a9 2d 04 04 1c f7 d6 8c db 35 17 68 .+...-.......5.h 00:22:29.478 000000a0 5c 78 0d f9 12 84 9e 67 26 e5 d0 98 f2 a7 05 44 \x.....g&......D 00:22:29.478 000000b0 e7 4d a2 ca 25 34 b4 5f 24 a5 16 5b 57 4c 90 54 .M..%4._$..[WL.T 00:22:29.478 000000c0 c1 75 39 cd d5 55 48 1d 29 8b f8 5b 38 48 b1 11 .u9..UH.)..[8H.. 00:22:29.478 000000d0 f1 7d fd 11 aa 18 b9 22 1c 93 2b 54 59 0d 2f 65 .}....."..+TY./e 00:22:29.478 000000e0 a4 b0 ad 7c 7c cc f3 14 80 8d a7 c4 dd 98 22 e7 ...||.........". 00:22:29.478 000000f0 46 9f 52 20 e0 3c d2 04 bb 98 f5 93 4a 56 7b c5 F.R .<......JV{. 00:22:29.478 00000100 ef d4 0b ea d9 a0 87 1b 26 6c 73 ea 43 43 29 3a ........&ls.CC): 00:22:29.478 00000110 7a 3c b4 e1 c7 8d 40 92 96 4d 57 88 2f d9 c9 5b z<....@..MW./..[ 00:22:29.478 00000120 e8 89 01 ec 5a 54 87 1a 84 dc c8 93 da b8 cc 52 ....ZT.........R 00:22:29.478 00000130 74 e5 04 5d d8 87 f6 cb cd ed fa a0 b6 59 0e 1e t..].........Y.. 00:22:29.478 00000140 63 79 bd eb 28 00 a1 87 5d 3b 7d e9 e1 af f7 c5 cy..(...];}..... 00:22:29.478 00000150 8b 1a 53 1c 4d fd e5 18 be a5 3e 81 4a 3f 83 0b ..S.M.....>.J?.. 00:22:29.478 00000160 ca bb 03 fd 43 35 db b7 73 e5 1d fd d3 a7 3b cf ....C5..s.....;. 
00:22:29.478 00000170 77 c3 b1 b3 d7 41 e7 dd 79 74 30 e3 ce 92 24 5b w....A..yt0...$[ 00:22:29.478 00000180 21 a4 e9 08 05 50 76 41 49 b7 a8 89 37 16 7e 3b !....PvAI...7.~; 00:22:29.478 00000190 b0 98 73 17 06 d0 60 28 39 af 10 4e 08 1c 5d df ..s...`(9..N..]. 00:22:29.478 000001a0 b0 a5 ee 20 dc ee d8 22 34 1e 69 e4 5f b8 9d 4a ... ..."4.i._..J 00:22:29.478 000001b0 18 f3 a5 a3 ff 3a 08 11 85 11 8a 83 08 a1 eb 18 .....:.......... 00:22:29.478 000001c0 3c 1a 65 af bc f8 1e da 58 c1 04 81 b7 57 0f 81 <.e.....X....W.. 00:22:29.478 000001d0 04 ef 81 99 30 71 28 c9 bd 7b df 14 d7 f4 c2 3c ....0q(..{.....< 00:22:29.478 000001e0 00 42 0f b9 98 66 a0 ba 2f dd 88 06 ec c2 51 13 .B...f../.....Q. 00:22:29.478 000001f0 3d 3f af e2 2e 1b bd d5 84 2d 51 c1 a7 53 88 ae =?.......-Q..S.. 00:22:29.478 00000200 c8 87 d9 87 44 35 f6 02 69 4c 03 ed 47 1b 2d d8 ....D5..iL..G.-. 00:22:29.478 00000210 1d 7c 51 dd 50 18 9c 27 47 47 a0 23 72 9e b9 14 .|Q.P..'GG.#r... 00:22:29.478 00000220 18 25 88 a7 0a a3 80 18 8b 1c 12 82 7c c4 22 d1 .%..........|.". 00:22:29.478 00000230 95 a1 5c 60 80 bf 6c 78 64 c9 7b f2 ff 37 be 92 ..\`..lxd.{..7.. 00:22:29.478 00000240 a0 be f4 f4 67 ca f0 d5 d6 a6 56 84 60 36 d7 af ....g.....V.`6.. 00:22:29.478 00000250 25 39 36 2b cb 37 88 83 d6 e7 51 ad 3e 40 d5 81 %96+.7....Q.>@.. 00:22:29.478 00000260 82 e8 d9 5a c4 62 85 fb 90 dc 93 cf f2 18 13 04 ...Z.b.......... 00:22:29.478 00000270 db 7e 6d 14 79 07 b2 f3 13 90 cd 19 75 b6 4e a4 .~m.y.......u.N. 00:22:29.478 00000280 42 7d d7 62 e4 80 30 1f 3e 09 39 bc 01 d9 d2 6e B}.b..0.>.9....n 00:22:29.478 00000290 dc b9 20 40 db ff 2b 25 87 c8 a5 14 a6 c1 ee e4 .. @..+%........ 00:22:29.478 000002a0 49 e8 88 52 21 68 81 ba be f5 be 34 ad d1 d9 ae I..R!h.....4.... 00:22:29.478 000002b0 c0 f0 71 5f 71 18 f4 b0 ad 4f f5 f2 3d 89 1d f0 ..q_q....O..=... 00:22:29.478 000002c0 32 56 0a 23 04 5d 0f d6 fe e0 36 96 42 cb 97 82 2V.#.]....6.B... 00:22:29.478 000002d0 d0 17 73 ed 1d af b5 a5 3f 51 cd 1b 84 24 f9 3a ..s.....?Q...$.: 00:22:29.478 000002e0 d8 cf db 04 82 5e e5 a9 13 6d 94 42 7e af 60 64 .....^...m.B~.`d 00:22:29.478 000002f0 53 cf 8b be a0 7e c5 9d 3d 3c 08 58 96 fa cd 7a S....~..=<.X...z 00:22:29.478 00000300 74 1e ae 2d de 5e c4 d2 22 c6 7d 0e 66 a6 8a 8f t..-.^..".}.f... 00:22:29.478 00000310 3b 9a b4 ad 7d 1a 86 cd 09 11 4c b8 6c d2 66 12 ;...}.....L.l.f. 00:22:29.478 00000320 0c 90 80 ab 32 c2 ab eb bc 60 a0 92 84 d2 b4 e9 ....2....`...... 00:22:29.478 00000330 76 1d 24 06 c6 78 1a 64 db ff b3 4a f3 9a 69 bd v.$..x.d...J..i. 00:22:29.478 00000340 21 27 72 59 2c 16 20 ff 15 3c 5f ab c3 1e 52 ed !'rY,. ..<_...R. 00:22:29.478 00000350 16 6e 3c 27 2a 72 a2 71 00 08 68 08 2f 8e 18 e9 .n<'*r.q..h./... 00:22:29.478 00000360 54 dd 20 b5 fe df ec 96 00 ee 88 fe 51 04 67 e7 T. .........Q.g. 00:22:29.478 00000370 d8 2e 8c a0 e2 4d 34 88 38 0f 9f fd 9a d9 dc e7 .....M4.8....... 00:22:29.478 00000380 33 c2 34 38 ee 16 54 d9 ce 4b 59 5a d4 55 19 db 3.48..T..KYZ.U.. 00:22:29.478 00000390 13 3f 0d 17 ec 28 b1 70 d0 6c fd 6b fd d1 a4 83 .?...(.p.l.k.... 00:22:29.478 000003a0 85 50 a6 85 b3 89 b9 71 af c0 7e b2 a4 80 68 f4 .P.....q..~...h. 00:22:29.478 000003b0 e1 fb 3e 5e e3 07 71 dd 92 6e 95 35 12 d7 40 a4 ..>^..q..n.5..@. 00:22:29.478 000003c0 69 fe d4 62 34 65 94 53 b9 95 62 a0 2f 16 09 e4 i..b4e.S..b./... 00:22:29.478 000003d0 dc 7a 45 67 97 1d 88 4a bd fb e7 4d c8 e4 36 bd .zEg...J...M..6. 00:22:29.478 000003e0 d1 76 99 a6 33 48 0d 14 ad 22 32 43 e3 5d ed af .v..3H..."2C.].. 
00:22:29.478 000003f0 4d 71 62 5c f7 b1 bc 44 b3 47 75 ab 47 4d 89 22 Mqb\...D.Gu.GM." 00:22:29.478 [2024-09-27 13:27:11.026426] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key3, hash=1, dhgroup=5, seq=3775755219, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.478 [2024-09-27 13:27:11.026894] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.478 [2024-09-27 13:27:11.110453] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.478 [2024-09-27 13:27:11.110976] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.478 [2024-09-27 13:27:11.111209] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.478 [2024-09-27 13:27:11.111599] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.478 [2024-09-27 13:27:11.163486] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.478 [2024-09-27 13:27:11.163784] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 1 (sha256) 00:22:29.478 [2024-09-27 13:27:11.163902] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 5 (ffdhe8192) 00:22:29.478 [2024-09-27 13:27:11.164123] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.478 [2024-09-27 13:27:11.164449] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.478 ctrlr pubkey: 00:22:29.478 00000000 89 c5 6c e4 9a f0 c8 4f d1 86 f3 71 33 24 d9 ef ..l....O...q3$.. 00:22:29.478 00000010 51 6b 0c 48 c5 e8 9e 92 20 d6 bd 8f c5 14 dd 9e Qk.H.... ....... 00:22:29.478 00000020 2d c6 9f ac 1b f2 db 25 ae 4f b7 2f 9a 73 f5 56 -......%.O./.s.V 00:22:29.478 00000030 d9 c6 70 b5 19 e1 03 4c c1 9d d6 34 7f 1b f8 33 ..p....L...4...3 00:22:29.478 00000040 7c 15 90 36 1b 99 5b 9e c3 89 20 00 e1 ea 0f 5d |..6..[... ....] 00:22:29.478 00000050 21 73 f0 da 94 d4 8d e9 a0 f3 bc 61 df a6 7d ad !s.........a..}. 00:22:29.478 00000060 4f 2f dd 44 0b ef 36 da 02 37 68 c1 44 a2 c7 a2 O/.D..6..7h.D... 00:22:29.478 00000070 6c f0 13 b2 30 ea fa f9 be 1a fa 72 35 f0 20 ec l...0......r5. . 00:22:29.478 00000080 30 5f a0 97 36 4c 3d 99 52 4a 6d c9 21 10 b0 6a 0_..6L=.RJm.!..j 00:22:29.478 00000090 87 91 7f 54 b7 dc ee ee 06 8d e0 23 82 d3 4c f4 ...T.......#..L. 00:22:29.478 000000a0 63 68 05 cd 98 00 cc 1e 58 87 76 00 c8 a5 a4 99 ch......X.v..... 00:22:29.478 000000b0 fb e4 64 65 8f cb 24 2f 66 b4 54 6d c0 a9 c2 42 ..de..$/f.Tm...B 00:22:29.478 000000c0 93 19 07 24 2e 7a 98 62 5e bb b4 e1 2b d4 cc f0 ...$.z.b^...+... 00:22:29.478 000000d0 35 d3 5a 61 f8 ce 29 d1 0d 09 dc d2 99 dc 06 88 5.Za..)......... 00:22:29.478 000000e0 4f 5c 93 a4 8a 56 23 af c2 05 43 5b 8c aa e6 08 O\...V#...C[.... 00:22:29.478 000000f0 9d 81 b7 83 83 83 6c aa 1a c3 c3 d2 e7 1c 51 e4 ......l.......Q. 
00:22:29.478 00000100 35 35 be 92 b3 48 f4 b3 56 bd 4f 7a ed 17 8e ef 55...H..V.Oz.... 00:22:29.478 00000110 ca 67 fb ed 3d 53 cc 8b f7 11 de 3a f9 94 f9 e2 .g..=S.....:.... 00:22:29.478 00000120 9c a4 44 58 75 47 71 ce 15 ec 31 4e 08 c1 f3 bc ..DXuGq...1N.... 00:22:29.478 00000130 01 32 e3 85 f2 45 03 9f eb e5 73 91 e6 e3 4d 82 .2...E....s...M. 00:22:29.478 00000140 41 82 10 40 ed 31 3f 97 be 87 57 bb 2c e1 22 12 A..@.1?...W.,.". 00:22:29.478 00000150 f7 1e d8 1b ce 19 c5 77 7b 6e 59 86 89 3c ef e2 .......w{nY..<.. 00:22:29.478 00000160 26 d5 82 2e a5 b0 b3 ea 73 c7 a9 ca ad 90 7b 78 &.......s.....{x 00:22:29.478 00000170 8e 40 0d f9 fa 06 b9 a4 02 91 20 75 4d 71 34 96 .@........ uMq4. 00:22:29.478 00000180 66 09 1e 02 26 f2 1f 7b 42 68 f9 75 eb 17 b8 ba f...&..{Bh.u.... 00:22:29.478 00000190 59 bb fa e2 f4 18 73 67 0f 64 83 41 bd a5 b6 aa Y.....sg.d.A.... 00:22:29.478 000001a0 89 ef 10 cf 81 84 71 6e dd ac b5 9b 60 76 fb f8 ......qn....`v.. 00:22:29.478 000001b0 13 16 da f2 e5 03 de e8 bf ff 71 34 7a a0 23 26 ..........q4z.#& 00:22:29.478 000001c0 7b d9 5b 3a 27 25 8d 51 b6 63 6e 38 17 99 ec 85 {.[:'%.Q.cn8.... 00:22:29.478 000001d0 71 8d 51 ca 50 d3 d0 1f dc 5a 1c 23 b6 ed e4 ef q.Q.P....Z.#.... 00:22:29.478 000001e0 58 35 8e 84 e2 fd 81 e6 d8 29 f9 d2 94 eb 24 20 X5.......)....$ 00:22:29.478 000001f0 10 c3 2e 01 75 39 9e 2f 24 0b 0a b7 a9 3a ab 02 ....u9./$....:.. 00:22:29.478 00000200 f3 e0 81 b4 2e 96 12 2e f9 a7 6b 4a 96 cc b0 fb ..........kJ.... 00:22:29.478 00000210 79 5a dc 19 80 d7 e5 9a 0f 60 82 03 ad af 17 8b yZ.......`...... 00:22:29.478 00000220 db 3b 58 65 78 4c d9 30 7c a1 7a 3c 6b d0 d8 b7 .;XexL.0|.z 00:22:29.478 00000240 94 16 b5 f7 82 6a cb d8 92 c3 fd da 16 f7 f4 22 .....j........." 00:22:29.478 00000250 17 8c 1c 4b 02 6f 2d 7e 8a 95 4a 4d 00 c8 43 5e ...K.o-~..JM..C^ 00:22:29.478 00000260 55 98 3c 42 32 9e 7f 7a 10 c0 1c 5d a9 be 44 a0 U. 00:22:29.479 000003d0 b3 78 c3 85 08 21 7c 61 b1 ca 34 74 0c de 5a 59 .x...!|a..4t..ZY 00:22:29.479 000003e0 a8 ea 6a c8 4c 12 78 d9 9f 67 42 b4 e9 3f 66 ab ..j.L.x..gB..?f. 00:22:29.479 000003f0 a7 e5 04 fe ce 5b 69 c2 87 a7 3b 34 ff 71 07 12 .....[i...;4.q.. 00:22:29.479 dh secret: 00:22:29.479 00000000 67 54 4d 91 a3 20 57 b4 64 fc 1b e1 24 85 63 d4 gTM.. W.d...$.c. 00:22:29.479 00000010 99 3e 76 dc 5b 74 82 b0 4f 5e 9b aa e2 c7 4a aa .>v.[t..O^....J. 00:22:29.479 00000020 1b 54 70 5a 6a b7 e4 e6 ef 0f 04 d8 fb b7 fb 2c .TpZj.........., 00:22:29.479 00000030 1d 3a 37 2d 4f 3b 07 df ce 14 d5 63 4c fd ca 4b .:7-O;.....cL..K 00:22:29.479 00000040 a8 f1 15 80 5f 09 12 ae 9d bf 34 ad 45 35 6c 02 ...._.....4.E5l. 00:22:29.479 00000050 4c 0b 43 53 69 70 43 12 4f 19 c8 00 89 db 1d b2 L.CSipC.O....... 00:22:29.479 00000060 fc d6 d9 a5 95 f5 ba 56 68 ae 59 c4 e7 6e fa bb .......Vh.Y..n.. 00:22:29.479 00000070 1c 51 6d 44 8d d4 86 59 5d 5e fd 70 e8 e8 b4 5f .QmD...Y]^.p..._ 00:22:29.479 00000080 09 fd 15 ee ce 6f 16 49 ad ed a3 53 b4 48 2e 61 .....o.I...S.H.a 00:22:29.479 00000090 67 94 4f 47 8d f9 10 69 18 2d 1a 30 97 46 07 dd g.OG...i.-.0.F.. 00:22:29.479 000000a0 cd 20 bc 29 2b c4 95 da 63 0a 34 a3 3a e0 7f dd . .)+...c.4.:... 00:22:29.479 000000b0 da b4 77 0d 17 8d 47 eb 37 d4 37 98 95 32 4c 39 ..w...G.7.7..2L9 00:22:29.479 000000c0 91 f6 ca 85 65 e6 ed 83 2c 0d 4d b4 69 41 2c 7b ....e...,.M.iA,{ 00:22:29.479 000000d0 38 90 56 53 b4 3d 47 b9 e9 6b 59 b6 c9 1c 51 63 8.VS.=G..kY...Qc 00:22:29.479 000000e0 d5 42 18 3a 5d 62 67 20 c6 c7 1b 6e 5b 92 c2 7f .B.:]bg ...n[... 
00:22:29.479 000000f0 a8 9f c3 33 cc 90 05 41 7f 3b 06 e5 b0 1d 20 ea ...3...A.;.... . 00:22:29.479 00000100 18 a7 02 a7 85 67 09 c2 44 5e ef 68 c5 43 fb a4 .....g..D^.h.C.. 00:22:29.479 00000110 e9 2c a3 dc af da 94 9f 3c c1 6d c0 77 f8 82 f3 .,......<.m.w... 00:22:29.479 00000120 21 00 94 9a 43 75 29 8a b4 27 e5 69 3e 9c 89 36 !...Cu)..'.i>..6 00:22:29.479 00000130 19 aa e1 9e 26 5f 64 a6 f9 ae 13 13 4c 06 38 29 ....&_d.....L.8) 00:22:29.479 00000140 e7 7b c2 00 71 da 62 37 f2 4f d0 83 1c eb 5c fa .{..q.b7.O....\. 00:22:29.479 00000150 dc 57 d7 90 11 a4 bc 30 c5 81 a5 29 7a 73 b2 10 .W.....0...)zs.. 00:22:29.479 00000160 07 c3 9a cd 81 15 63 13 88 8e 6e 5f ad 70 48 c9 ......c...n_.pH. 00:22:29.479 00000170 ad 89 ea 43 5b c7 3e 21 1f f2 e8 a9 d5 86 b7 9e ...C[.>!........ 00:22:29.479 00000180 0e aa 2b 60 53 7f f3 bd b0 b8 59 66 dc e1 e1 14 ..+`S.....Yf.... 00:22:29.479 00000190 fe 26 0f f4 0a 88 f3 43 64 50 ec d3 ad c4 93 3e .&.....CdP.....> 00:22:29.479 000001a0 07 86 f5 5d 40 5f 11 f7 ac 2c 0e e9 2a fd 66 2a ...]@_...,..*.f* 00:22:29.479 000001b0 a4 31 59 05 c6 d7 dc 55 43 81 dc f2 e7 11 e5 d2 .1Y....UC....... 00:22:29.479 000001c0 01 42 bf 46 8f df b5 2f df dd 76 f7 bc 1a cb 77 .B.F.../..v....w 00:22:29.479 000001d0 ff 1f 16 52 0c e0 48 d2 d7 ff 6c 87 e7 f8 bc 6a ...R..H...l....j 00:22:29.479 000001e0 d3 63 ce d2 89 45 6d 76 ab 02 78 9c ca 84 83 00 .c...Emv..x..... 00:22:29.479 000001f0 bd 41 7b 6f 42 8c 0b 17 01 a3 06 9d ac 71 23 73 .A{oB........q#s 00:22:29.479 00000200 d9 66 46 6a 3f 2c ff 58 8f 7d b1 41 27 91 c1 b5 .fFj?,.X.}.A'... 00:22:29.479 00000210 b8 37 b2 5e c1 a4 5e a0 2c 65 3a 7a d2 f0 19 56 .7.^..^.,e:z...V 00:22:29.479 00000220 85 d5 12 78 f8 0b 93 ed c1 e4 9c 8f 7d 9c b5 04 ...x........}... 00:22:29.479 00000230 f0 a6 66 da 34 7d 83 d2 9e 55 03 46 1c 6f 98 79 ..f.4}...U.F.o.y 00:22:29.479 00000240 68 77 d1 ef a0 51 c9 72 37 f7 9c b7 ab 46 5e 83 hw...Q.r7....F^. 00:22:29.479 00000250 8c 9e 4b 9b 4e 6e 33 54 ed 9b 58 30 c1 95 f9 a9 ..K.Nn3T..X0.... 00:22:29.479 00000260 2b 4e f1 f2 47 b7 ce f0 7c 7c 89 a5 fe 98 e4 8c +N..G...||...... 00:22:29.479 00000270 01 b2 e3 44 d8 27 c5 52 32 e4 d5 7d ce af 8a e9 ...D.'.R2..}.... 00:22:29.479 00000280 53 e6 48 23 36 cd 8d 04 f1 27 4e c3 cd 71 18 79 S.H#6....'N..q.y 00:22:29.479 00000290 df 0d f4 a9 60 b1 f0 ec fe d2 48 19 9d 99 88 96 ....`.....H..... 00:22:29.479 000002a0 f6 84 fd 84 86 7d d3 d2 90 16 59 89 79 af 41 88 .....}....Y.y.A. 00:22:29.479 000002b0 4b 53 90 2d a6 c1 2d d6 0c 41 35 8b 5b 13 79 68 KS.-..-..A5.[.yh 00:22:29.479 000002c0 02 d0 7d 58 99 44 58 88 86 18 e8 85 e0 35 e7 18 ..}X.DX......5.. 00:22:29.479 000002d0 68 34 c5 5b dc a1 d5 ce 55 15 65 83 f0 92 6c 58 h4.[....U.e...lX 00:22:29.479 000002e0 cf 61 12 7a 53 af 24 e0 ef 68 f4 b4 89 0e 10 6f .a.zS.$..h.....o 00:22:29.479 000002f0 61 29 8e c5 6f e3 74 73 5c 8e 3e 72 e9 b9 e5 a5 a)..o.ts\.>r.... 00:22:29.479 00000300 7b 8d d5 78 e7 06 7f 9d 76 ae e5 f1 a3 45 7d bf {..x....v....E}. 00:22:29.479 00000310 73 e8 cd 41 61 d7 64 f0 3c 8c 46 a2 0f c5 66 0c s..Aa.d.<.F...f. 00:22:29.479 00000320 6e 01 0e 5a b5 26 2f 5b 90 f6 0a 90 d6 fc 5e a8 n..Z.&/[......^. 00:22:29.479 00000330 a6 ff d9 e9 11 d7 63 fd 36 0e 0c ca 1d 32 e8 42 ......c.6....2.B 00:22:29.479 00000340 cd a5 61 69 f3 66 a5 b7 1a 02 e1 0e 09 ba d6 33 ..ai.f.........3 00:22:29.479 00000350 f1 86 86 5d 50 85 f8 3f 89 5f 2d db 20 ce d2 a9 ...]P..?._-. ... 
00:22:29.479 00000360 23 ed d2 77 40 3f 61 1e cc e2 ee fd a1 a2 b8 79 #..w@?a........y 00:22:29.479 00000370 10 e5 7a 26 43 f1 0c b6 37 3e 81 b6 b4 c8 76 ba ..z&C...7>....v. 00:22:29.479 00000380 b5 2c 00 8b 2b 05 a3 e8 84 29 10 99 61 c9 3e 12 .,..+....)..a.>. 00:22:29.479 00000390 f3 c4 04 2c 45 b0 3b e7 cd 4f 6e 5c 4f fe c8 43 ...,E.;..On\O..C 00:22:29.479 000003a0 83 81 ea 93 b5 b9 c2 49 2a e8 30 e8 34 57 96 07 .......I*.0.4W.. 00:22:29.479 000003b0 bb 61 04 b2 48 7d e4 6d 6b bd 81 24 53 c8 89 f1 .a..H}.mk..$S... 00:22:29.479 000003c0 ac 27 5f fd 23 82 29 fd d9 5a ee 8d 99 22 16 58 .'_.#.)..Z...".X 00:22:29.479 000003d0 16 03 3c 03 19 c6 b2 66 01 2f ac 1d c2 48 fa a2 ..<....f./...H.. 00:22:29.479 000003e0 13 23 f8 f0 10 50 b2 c6 29 e6 b5 ec 7a aa 77 d3 .#...P..)...z.w. 00:22:29.479 000003f0 91 87 5b e9 c7 9b c3 f0 86 70 51 51 a9 cc 8c 1c ..[......pQQ.... 00:22:29.479 [2024-09-27 13:27:11.328058] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key3, hash=1, dhgroup=5, seq=3775755220, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.479 [2024-09-27 13:27:11.328457] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.479 [2024-09-27 13:27:11.412294] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.479 [2024-09-27 13:27:11.412799] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.479 [2024-09-27 13:27:11.413023] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.479 [2024-09-27 13:27:11.413258] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.479 [2024-09-27 13:27:11.562006] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.479 [2024-09-27 13:27:11.562260] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.479 [2024-09-27 13:27:11.562453] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 5 (ffdhe8192) 00:22:29.479 [2024-09-27 13:27:11.562795] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.479 [2024-09-27 13:27:11.563137] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.479 ctrlr pubkey: 00:22:29.479 00000000 2c 4f 91 b5 b7 91 e6 db 3f a4 01 18 d2 d6 e9 0e ,O......?....... 00:22:29.479 00000010 84 d4 96 69 87 4d ab 4d 60 d0 e6 e6 0f f7 97 9c ...i.M.M`....... 00:22:29.479 00000020 7f 30 38 70 ca f5 91 03 98 1b 29 15 d8 24 ce 62 .08p......)..$.b 00:22:29.479 00000030 6a cd 70 8b 5d 3b 24 ca d2 8f aa 94 56 22 38 e5 j.p.];$.....V"8. 00:22:29.479 00000040 e2 52 0a 20 c3 63 a7 2c 5e 0f 0d 17 da cf 29 34 .R. .c.,^.....)4 00:22:29.480 00000050 87 40 cb 56 97 d9 ae 52 83 ca cf 81 4a 10 73 fb .@.V...R....J.s. 00:22:29.480 00000060 55 2c 63 69 0b f1 5a 6c 84 40 b4 56 84 fe 50 b2 U,ci..Zl.@.V..P. 
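Each round above dumps a 1024-byte "ctrlr pubkey", "host pubkey" and "dh secret", consistent with the negotiated dhgroup 5 (ffdhe8192), while the reply carries a 32-byte value, consistent with hash 1 (sha256). As a rough illustration of how a finite-field DH secret can be folded into a challenge via hash-then-HMAC, here is a hedged Python sketch; the modulus is a short 768-bit MODP prime rather than the real 8192-bit ffdhe8192 group, and the byte layout is an assumption for illustration, not the authoritative TP 8006 formula (see nvme_auth.c for that):

# Rough sketch: fold a finite-field DH shared secret into a CHAP-style
# challenge with SHA-256 (hash 1 in the log). The modulus below is a 768-bit
# MODP prime (Oakley Group 1, RFC 2409), chosen only to keep the example fast;
# the log actually negotiated dhgroup 5, the 8192-bit ffdhe8192 group from
# RFC 7919. The hash-then-HMAC layout is an assumed construction, not the
# exact spec formula -- consult SPDK's lib/nvme/nvme_auth.c for the real one.
import hashlib
import hmac
import secrets

P = 0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A63A3620FFFFFFFFFFFFFFFF
G = 2

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

host_priv, host_pub = dh_keypair()    # analogous to "host pubkey" above
ctrlr_priv, ctrlr_pub = dh_keypair()  # analogous to "ctrlr pubkey" above

# Both ends derive the same shared secret (the "dh secret" dumps above).
shared = pow(ctrlr_pub, host_priv, P)
assert shared == pow(host_pub, ctrlr_priv, P)

k = shared.to_bytes((P.bit_length() + 7) // 8, "big")
challenge = secrets.token_bytes(32)   # controller-chosen challenge value

# Assumed construction: hash the secret, then HMAC the challenge with the digest.
augmented = hmac.new(hashlib.sha256(k).digest(), challenge, hashlib.sha256).digest()
print(len(augmented), augmented.hex())  # 32 bytes, matching len=32 in the reply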
00:22:29.480 00000070 ac b7 0e 09 ed 07 20 b6 b4 75 49 00 90 37 32 d2 ...... ..uI..72. 00:22:29.480 00000080 1b 25 2a dc 12 16 5a a8 20 a8 e0 c7 0b 6c 87 a1 .%*...Z. ....l.. 00:22:29.480 00000090 b0 e9 4d 9c 15 a9 3b fb 0b f3 88 37 6f 3c d7 c6 ..M...;....7o<.. 00:22:29.480 000000a0 dd f6 38 0b a6 1e 5d 46 80 5b 10 6c e1 54 2d 22 ..8...]F.[.l.T-" 00:22:29.480 000000b0 c9 66 3e 4f 9c 33 3e 56 11 ca 21 d9 a2 82 b9 d2 .f>O.3>V..!..... 00:22:29.480 000000c0 88 11 2d 9a 9e 19 14 c1 89 35 05 93 7c e9 04 d4 ..-......5..|... 00:22:29.480 000000d0 a8 53 b1 bb 32 94 c5 52 2b 8c d3 b4 0e d6 06 53 .S..2..R+......S 00:22:29.480 000000e0 1c 81 1b a6 99 60 be 86 4f 05 17 3d f2 e2 34 b4 .....`..O..=..4. 00:22:29.480 000000f0 93 7f f1 23 a2 d9 2d 25 25 88 e2 e5 dc 6f b6 b0 ...#..-%%....o.. 00:22:29.480 00000100 06 0e 6d c5 aa bf 58 41 76 cc 0e 54 97 23 1b 98 ..m...XAv..T.#.. 00:22:29.480 00000110 b2 e3 05 a7 d3 b4 b4 17 f0 51 41 70 89 4c fa d1 .........QAp.L.. 00:22:29.480 00000120 a3 2e 32 db da ee 36 b0 e7 d6 66 0b 26 12 2a 0f ..2...6...f.&.*. 00:22:29.480 00000130 35 40 65 f6 71 7d ab f7 68 c4 6f f0 65 e0 f7 4f 5@e.q}..h.o.e..O 00:22:29.480 00000140 17 aa 37 f9 68 5c 9e 03 41 f7 3c 78 35 13 8f 72 ..7.h\..A..!...w... 00:22:29.480 00000170 39 0a 24 92 86 ce fa 90 ab 49 19 67 5d 21 5e 02 9.$......I.g]!^. 00:22:29.480 00000180 ee 9f 4d 78 c4 26 3e d2 66 bb 32 a6 58 c1 78 5e ..Mx.&>.f.2.X.x^ 00:22:29.480 00000190 45 ba ab cc 85 c9 08 6f 6a 5f 64 93 3e 04 43 9b E......oj_d.>.C. 00:22:29.480 000001a0 0e db 46 a5 41 f3 6a c4 1a 7e aa 15 e2 f8 b2 ad ..F.A.j..~...... 00:22:29.480 000001b0 4d d4 1e cd f1 01 24 8e 7b b8 df b1 2d 8f 8d f2 M.....$.{...-... 00:22:29.480 000001c0 ea c4 18 a1 9d eb e7 0d ef 64 ea 9b 52 4c 25 62 .........d..RL%b 00:22:29.480 000001d0 54 ab 03 e1 71 b9 06 19 a0 cc d2 78 fe 0e 64 4c T...q......x..dL 00:22:29.480 000001e0 79 14 92 4f d0 ad ab e5 cf e7 98 71 1a 1d 91 3f y..O.......q...? 00:22:29.480 000001f0 13 55 d6 b6 82 8e 45 0c 87 3f 7c 5a b3 5c 17 46 .U....E..?|Z.\.F 00:22:29.480 00000200 b8 53 69 b4 7d 98 69 2e 39 29 be 41 46 62 a6 05 .Si.}.i.9).AFb.. 00:22:29.480 00000210 02 99 fd 13 fe d5 2c bd e3 f4 37 ad bf 46 1d d8 ......,...7..F.. 00:22:29.480 00000220 a0 39 38 c7 91 8c d7 80 71 4a 96 8d df 27 7e 43 .98.....qJ...'~C 00:22:29.480 00000230 3b 75 ce 72 e8 ea ef 48 a1 19 9c 8b 75 24 80 92 ;u.r...H....u$.. 00:22:29.480 00000240 68 f9 86 7f d0 a9 07 bf 0b ca a6 30 ad 29 1a 75 h..........0.).u 00:22:29.480 00000250 a7 6f 44 05 73 df 2a 7e 94 3a 46 26 39 5c 66 94 .oD.s.*~.:F&9\f. 00:22:29.480 00000260 4e de a0 d1 2d 44 7c 66 53 23 be 3c fd 20 79 c6 N...-D|fS#.<. y. 00:22:29.480 00000270 bf c9 48 65 a4 02 35 56 92 36 2f 43 20 89 47 e0 ..He..5V.6/C .G. 00:22:29.480 00000280 25 52 0b a1 9a c2 ca 2b 36 7e 25 d1 34 bb 33 0f %R.....+6~%.4.3. 00:22:29.480 00000290 c2 24 ab 97 5a 69 83 09 c9 6d c7 29 2a 9a 4b e8 .$..Zi...m.)*.K. 00:22:29.480 000002a0 ca ea 90 f3 7d f4 2c 18 e1 95 78 d6 4b e0 d0 65 ....}.,...x.K..e 00:22:29.480 000002b0 b9 ce 82 1f 14 e0 bb 06 74 47 36 7e 6a 07 dc b5 ........tG6~j... 00:22:29.480 000002c0 b9 1a 07 e6 c1 fa 73 40 a6 08 e6 cd 3f 53 94 91 ......s@....?S.. 00:22:29.480 000002d0 21 ce 6a a5 b0 12 2a ef f2 74 a7 5c d0 84 18 83 !.j...*..t.\.... 00:22:29.480 000002e0 2a c7 d9 19 a3 88 8a 19 8e 01 a0 dc a3 43 84 95 *............C.. 00:22:29.480 000002f0 23 1d 71 c9 2c 6b 9a 21 a0 cd 22 4a 75 32 96 86 #.q.,k.!.."Ju2.. 
00:22:29.480 00000300 8b 8d 57 f4 fb 68 d8 65 f7 22 42 44 c7 6c e9 7c ..W..h.e."BD.l.| 00:22:29.480 00000310 9a 52 9f 2a ce 0f 24 ef 52 4f fe 6b 1c 3b 30 f2 .R.*..$.RO.k.;0. 00:22:29.480 00000320 e8 d3 8f 06 10 3b a4 b0 63 a3 f1 3e 4d 54 7a f8 .....;..c..>MTz. 00:22:29.480 00000330 d8 62 83 99 e2 18 9a 6b e6 c6 ea 1d a2 3f e3 44 .b.....k.....?.D 00:22:29.480 00000340 9b 85 6a 3a 6b a8 db 30 b7 d5 38 ef a3 40 bc 1c ..j:k..0..8..@.. 00:22:29.480 00000350 be ae 9e 32 5e 00 6a 15 23 07 5d 33 93 20 23 b5 ...2^.j.#.]3. #. 00:22:29.480 00000360 04 40 ac 79 68 cb 8b 15 c9 83 6d 7c c4 6e 7f 75 .@.yh.....m|.n.u 00:22:29.480 00000370 3e c6 37 fd e3 72 37 ea 9d 83 04 5b 2d 0c 7a b3 >.7..r7....[-.z. 00:22:29.480 00000380 4c 00 18 ad d4 43 4f 02 f2 45 d2 2b 35 78 c7 81 L....CO..E.+5x.. 00:22:29.480 00000390 25 cc 1e 02 5d 7e e6 0f df 0f 67 79 2f dc d8 3a %...]~....gy/..: 00:22:29.480 000003a0 8d 35 de 95 4d a8 e5 00 96 a7 0b ef d3 14 81 e1 .5..M........... 00:22:29.480 000003b0 c4 b3 c3 ef e2 12 2b bb 29 e0 b5 5b e0 c2 b9 e7 ......+.)..[.... 00:22:29.480 000003c0 3f 92 37 87 6f 76 a1 8e 6d f3 18 b6 07 c1 80 a0 ?.7.ov..m....... 00:22:29.480 000003d0 50 09 78 6f ee c3 ac 0c 20 16 22 91 37 3a 67 d6 P.xo.... .".7:g. 00:22:29.480 000003e0 94 3f 88 80 6d 6b 93 d6 2b 42 a2 68 b8 a8 3e cc .?..mk..+B.h..>. 00:22:29.480 000003f0 ce cf f7 25 64 02 05 88 10 a1 80 23 01 02 e0 48 ...%d......#...H 00:22:29.480 host pubkey: 00:22:29.480 00000000 e8 10 ac 02 b6 63 23 6f 39 55 52 81 98 68 43 30 .....c#o9UR..hC0 00:22:29.480 00000010 e6 d0 fc 74 c1 c7 4e 7f 38 ac 48 f9 9d 20 33 48 ...t..N.8.H.. 3H 00:22:29.480 00000020 37 db 85 70 98 21 bb 6a 71 f9 41 87 ea 4d 83 8c 7..p.!.jq.A..M.. 00:22:29.480 00000030 53 fa 3f 8d c3 09 7b 9d 6a bb 50 56 2e e8 7f 6f S.?...{.j.PV...o 00:22:29.480 00000040 22 c3 52 03 49 4e 2b 21 68 ac 3a 01 42 47 25 1d ".R.IN+!h.:.BG%. 00:22:29.480 00000050 01 2e 97 22 88 3e 51 fc d1 62 dd 34 1c f6 bc b4 ...".>Q..b.4.... 00:22:29.480 00000060 74 09 18 e0 85 28 8e 60 51 81 3c 94 02 29 82 3b t....(.`Q.<..).; 00:22:29.480 00000070 68 29 3d c2 d7 21 7d e7 4d 72 55 53 d1 26 62 1c h)=..!}.MrUS.&b. 00:22:29.480 00000080 f5 2b 74 8e f5 2d e2 b7 86 5c 63 74 19 3d 3b cc .+t..-...\ct.=;. 00:22:29.480 00000090 da fa fe e6 fe df cb d2 7b 09 9d 99 7c e3 89 57 ........{...|..W 00:22:29.480 000000a0 8b 94 2d 90 d2 e4 c7 07 f0 7b c0 94 1d 4c 0e 49 ..-......{...L.I 00:22:29.480 000000b0 38 80 0c ac 3b a2 43 a5 65 d8 99 de 3e 58 1e f4 8...;.C.e...>X.. 00:22:29.480 000000c0 6d e1 59 98 59 a1 79 61 f7 ea 13 4f 31 d4 60 d2 m.Y.Y.ya...O1.`. 00:22:29.480 000000d0 37 8a 7c ab f1 81 c2 ec 4d 78 73 61 fc 09 0a c2 7.|.....Mxsa.... 00:22:29.480 000000e0 53 0c 69 2c 71 0b a7 a1 39 c9 94 8c 4f 68 72 d3 S.i,q...9...Ohr. 00:22:29.480 000000f0 51 c5 1c 8b fc ee b9 3a 2e 63 47 22 46 41 24 e8 Q......:.cG"FA$. 00:22:29.480 00000100 6b fe e0 8c 73 35 65 c2 cd e4 33 ed 7c fd d6 da k...s5e...3.|... 00:22:29.480 00000110 0c a0 dc 83 d8 68 cf 5a e5 bd ce 68 45 16 57 41 .....h.Z...hE.WA 00:22:29.480 00000120 0c 69 f2 d7 92 91 b9 39 dc 9f 49 e4 05 2a 78 53 .i.....9..I..*xS 00:22:29.480 00000130 1e ce 09 4f 46 8a 2c a8 94 fc c1 1f 68 4d ca 59 ...OF.,.....hM.Y 00:22:29.480 00000140 47 1b 8c b0 40 3b fc 63 b7 f1 bd dc c8 90 bc c1 G...@;.c........ 00:22:29.480 00000150 11 48 d9 2e fb 11 ce 9e aa 73 e2 9a fd d4 98 89 .H.......s...... 00:22:29.480 00000160 5c 80 d9 97 dd 9d 57 e8 2f 4a ca 72 a5 33 c0 86 \.....W./J.r.3.. 
00:22:29.480 00000170 8a b0 94 8c 81 b9 60 52 6d 50 a8 9e 06 b8 e9 51 ......`RmP.....Q 00:22:29.480 00000180 0d cc c8 da bc b9 c7 34 66 0d 90 d4 d2 9f 45 a1 .......4f.....E. 00:22:29.480 00000190 ee c0 74 3d 9f b0 82 4c d6 36 3d 37 88 ae 55 8f ..t=...L.6=7..U. 00:22:29.480 000001a0 dd 16 dc 1e 7f c4 b5 f4 5d cc 0f 74 1a 8d ce d4 ........]..t.... 00:22:29.480 000001b0 24 b4 c0 f5 a6 f1 d4 e8 ce 97 da 4b 1f 68 50 1f $..........K.hP. 00:22:29.480 000001c0 61 71 93 84 0c 52 5c 97 17 c8 1d 2a e0 13 af 28 aq...R\....*...( 00:22:29.480 000001d0 78 c0 c4 cf ec 28 fc 14 5b 13 e1 92 0e b0 bc 30 x....(..[......0 00:22:29.480 000001e0 b9 80 f9 62 65 b7 94 50 2d 5e 96 43 93 b0 c3 05 ...be..P-^.C.... 00:22:29.480 000001f0 2a 34 27 7d df 6f 08 01 c3 cc b0 b2 79 a3 39 ed *4'}.o......y.9. 00:22:29.480 00000200 1f d7 52 cf 7c ac fe de e4 4c 98 49 34 b2 2d 65 ..R.|....L.I4.-e 00:22:29.480 00000210 e7 f5 cc 3a 37 d5 50 61 b5 75 2f 2f 23 a5 96 9a ...:7.Pa.u//#... 00:22:29.480 00000220 d6 d5 ca d5 69 9b 18 0a b8 c0 1b 1a 77 a4 1b 6a ....i.......w..j 00:22:29.480 00000230 97 42 d7 24 6d a4 28 b5 ed 9e 38 db 74 f6 35 ea .B.$m.(...8.t.5. 00:22:29.480 00000240 42 71 fa 9b 21 5b 3f 4f 22 85 83 60 6e 96 0b b3 Bq..![?O"..`n... 00:22:29.480 00000250 89 8f 06 ab 59 a4 75 fb ab d5 f8 5f 4a 9b 6e ea ....Y.u...._J.n. 00:22:29.480 00000260 53 21 d1 a9 39 e7 0b e8 8d f9 a8 7b cf 13 4a 61 S!..9......{..Ja 00:22:29.480 00000270 1f 09 44 44 bb 8c 31 51 c9 96 b7 c5 63 43 1b b7 ..DD..1Q....cC.. 00:22:29.480 00000280 d9 40 a4 87 c2 db fa 03 a1 d8 a1 ea d5 77 2a bb .@...........w*. 00:22:29.480 00000290 f6 67 37 99 5f 0a 7b 2f 1c 0c 04 5a e5 03 a4 43 .g7._.{/...Z...C 00:22:29.480 000002a0 0b c2 14 08 7a 77 bd 9d 72 4c 0f 8d 80 04 d2 1d ....zw..rL...... 00:22:29.480 000002b0 9f 13 64 18 9f d9 0d 45 a4 9c 43 a0 eb ac fa 45 ..d....E..C....E 00:22:29.480 000002c0 7e 60 df 53 d2 bb 9c 0b 87 14 eb d1 6a 8f be 59 ~`.S........j..Y 00:22:29.480 000002d0 2d f6 b6 06 04 a9 8e 21 36 83 94 48 5e 18 fb b0 -......!6..H^... 00:22:29.480 000002e0 45 02 11 35 be 83 13 04 2b f3 0b 00 34 ab db 57 E..5....+...4..W 00:22:29.480 000002f0 1e 83 ff 67 34 cd da a5 b0 a8 16 9b 85 a4 55 80 ...g4.........U. 00:22:29.480 00000300 7d 3e b0 5a f9 5b bd 13 14 21 84 90 cd 22 35 28 }>.Z.[...!..."5( 00:22:29.480 00000310 0b fa 6d ca a0 29 bc ae 13 4d 1e e7 61 77 c6 77 ..m..)...M..aw.w 00:22:29.480 00000320 b6 19 21 3d 73 4e aa 10 95 85 70 6c fd c5 a9 b5 ..!=sN....pl.... 00:22:29.480 00000330 31 a1 3a 93 7b 54 ad ba 5f 41 ba 6a 56 47 05 11 1.:.{T.._A.jVG.. 00:22:29.480 00000340 5c 6a ea 19 9d 1a f4 1e dd 78 27 ed 55 5c 28 d6 \j.......x'.U\(. 00:22:29.480 00000350 84 fe 16 1f 6b e3 68 4f 42 8e 2a 8e ed a4 e5 e7 ....k.hOB.*..... 00:22:29.480 00000360 b7 9e 17 51 44 92 d5 45 4d 63 7c 17 0e fe d8 66 ...QD..EMc|....f 00:22:29.480 00000370 1d 86 04 ae 4b 78 9d 34 2d 88 55 ea d0 68 ca 89 ....Kx.4-.U..h.. 00:22:29.480 00000380 c4 23 df b9 41 11 3b 69 88 29 07 64 73 6f bd f6 .#..A.;i.).dso.. 00:22:29.480 00000390 61 a4 50 12 69 37 e6 9f b6 6f 03 e8 dd ea 7d 22 a.P.i7...o....}" 00:22:29.480 000003a0 7c 94 d7 3b f6 52 25 d3 47 20 e4 0f 02 20 c6 a0 |..;.R%.G ... .. 00:22:29.480 000003b0 85 f4 3f f3 c5 3f dd a6 a8 d1 12 c3 ca cc 0d e5 ..?..?.......... 00:22:29.480 000003c0 12 36 b6 3a b5 82 8f 72 25 0a 5a 23 9c 22 c9 6c .6.:...r%.Z#.".l 00:22:29.480 000003d0 30 52 71 a0 47 5e ae a3 34 db 27 a6 67 fe b0 02 0Rq.G^..4.'.g... 
00:22:29.480 000003e0 11 e0 a2 50 08 ac a4 22 ca 55 bc 47 61 e7 da 26 ...P...".U.Ga..& 00:22:29.480 000003f0 2d 08 5b be c0 45 93 80 05 62 04 a9 9a 2a 60 e1 -.[..E...b...*`. 00:22:29.480 dh secret: 00:22:29.480 00000000 30 85 52 83 26 f1 74 c4 77 4b 61 c1 c1 28 78 6c 0.R.&.t.wKa..(xl 00:22:29.480 00000010 2a a5 a3 fd 1e 4e 09 2a 83 9c 07 df 19 48 34 3f *....N.*.....H4? 00:22:29.480 00000020 3f bc f2 90 5b 9a 29 39 c2 9c 4b ce 7f 85 ac 47 ?...[.)9..K....G 00:22:29.480 00000030 4c 52 7f 9a 6e cf 61 8f 43 19 7f b9 15 7c 99 a0 LR..n.a.C....|.. 00:22:29.480 00000040 a5 e8 01 b4 82 87 27 b7 13 1e b8 1c d6 46 f2 36 ......'......F.6 00:22:29.480 00000050 01 d7 9d 77 38 e2 51 9f 9a 98 80 fd be 3d 8c c0 ...w8.Q......=.. 00:22:29.480 00000060 2f 6d f1 5c be 4b c5 e6 9f e9 9e 06 22 d1 92 e5 /m.\.K......"... 00:22:29.480 00000070 8a e1 84 c9 1b 34 a6 09 d3 68 a2 ce 3b 16 05 26 .....4...h..;..& 00:22:29.480 00000080 c5 c2 a3 28 9f 4a f5 9a 71 d8 c7 78 93 15 0c 73 ...(.J..q..x...s 00:22:29.480 00000090 47 a2 e3 e4 b7 c4 94 61 b6 4c 09 93 7f 51 71 58 G......a.L...QqX 00:22:29.480 000000a0 fe b1 8d e5 78 c8 07 28 73 52 3a 9e 0d e3 83 08 ....x..(sR:..... 00:22:29.480 000000b0 51 93 c6 4d 10 3c 77 c2 00 0e 96 5e f4 d3 7e 88 Q..M.l=.4.(..M!t. 00:22:29.481 000003d0 27 8d 39 c8 02 70 01 32 e5 09 f1 52 c9 7b 98 b9 '.9..p.2...R.{.. 00:22:29.481 000003e0 97 a1 f0 62 a7 60 68 b8 51 4a 8e 63 e6 ed 9a c1 ...b.`h.QJ.c.... 00:22:29.481 000003f0 2a 6f 9e 03 a7 3c c2 66 59 93 32 e5 44 50 0f 6d *o...<.fY.2.DP.m 00:22:29.481 [2024-09-27 13:27:11.718413] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key4, hash=1, dhgroup=5, seq=3775755221, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.481 [2024-09-27 13:27:11.718720] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.481 [2024-09-27 13:27:11.804897] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.481 [2024-09-27 13:27:11.805217] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.481 [2024-09-27 13:27:11.805548] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.481 [2024-09-27 13:27:11.856328] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.481 [2024-09-27 13:27:11.856456] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 1 (sha256) 00:22:29.481 [2024-09-27 13:27:11.856741] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 5 (ffdhe8192) 00:22:29.481 [2024-09-27 13:27:11.856881] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.481 [2024-09-27 13:27:11.857323] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.481 ctrlr pubkey: 00:22:29.481 00000000 2c 4f 91 b5 b7 91 e6 db 3f a4 01 18 d2 d6 e9 0e ,O......?....... 00:22:29.481 00000010 84 d4 96 69 87 4d ab 4d 60 d0 e6 e6 0f f7 97 9c ...i.M.M`....... 
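All of the key material in this log is printed in the same offset / hex / ASCII layout. For readers reproducing these dumps, here is a small Python helper that emits the same formatting; the 8-digit offset, 16 bytes per row and '.' for non-printable bytes are inferred from the dumps above:

# Reproduce the offset / hex / ASCII dump layout used for the pubkey and
# "dh secret" dumps in this log (format details inferred from the log itself).
def hexdump(data: bytes) -> str:
    rows = []
    for off in range(0, len(data), 16):
        chunk = data[off:off + 16]
        hexpart = " ".join(f"{b:02x}" for b in chunk)
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        rows.append(f"{off:08x} {hexpart:<47} {text}")
    return "\n".join(rows)

if __name__ == "__main__":
    print(hexdump(bytes(range(48))))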
00:22:29.481 00000020 7f 30 38 70 ca f5 91 03 98 1b 29 15 d8 24 ce 62 .08p......)..$.b 00:22:29.481 00000030 6a cd 70 8b 5d 3b 24 ca d2 8f aa 94 56 22 38 e5 j.p.];$.....V"8. 00:22:29.481 00000040 e2 52 0a 20 c3 63 a7 2c 5e 0f 0d 17 da cf 29 34 .R. .c.,^.....)4 00:22:29.481 00000050 87 40 cb 56 97 d9 ae 52 83 ca cf 81 4a 10 73 fb .@.V...R....J.s. 00:22:29.481 00000060 55 2c 63 69 0b f1 5a 6c 84 40 b4 56 84 fe 50 b2 U,ci..Zl.@.V..P. 00:22:29.481 00000070 ac b7 0e 09 ed 07 20 b6 b4 75 49 00 90 37 32 d2 ...... ..uI..72. 00:22:29.481 00000080 1b 25 2a dc 12 16 5a a8 20 a8 e0 c7 0b 6c 87 a1 .%*...Z. ....l.. 00:22:29.481 00000090 b0 e9 4d 9c 15 a9 3b fb 0b f3 88 37 6f 3c d7 c6 ..M...;....7o<.. 00:22:29.481 000000a0 dd f6 38 0b a6 1e 5d 46 80 5b 10 6c e1 54 2d 22 ..8...]F.[.l.T-" 00:22:29.481 000000b0 c9 66 3e 4f 9c 33 3e 56 11 ca 21 d9 a2 82 b9 d2 .f>O.3>V..!..... 00:22:29.481 000000c0 88 11 2d 9a 9e 19 14 c1 89 35 05 93 7c e9 04 d4 ..-......5..|... 00:22:29.481 000000d0 a8 53 b1 bb 32 94 c5 52 2b 8c d3 b4 0e d6 06 53 .S..2..R+......S 00:22:29.481 000000e0 1c 81 1b a6 99 60 be 86 4f 05 17 3d f2 e2 34 b4 .....`..O..=..4. 00:22:29.481 000000f0 93 7f f1 23 a2 d9 2d 25 25 88 e2 e5 dc 6f b6 b0 ...#..-%%....o.. 00:22:29.481 00000100 06 0e 6d c5 aa bf 58 41 76 cc 0e 54 97 23 1b 98 ..m...XAv..T.#.. 00:22:29.481 00000110 b2 e3 05 a7 d3 b4 b4 17 f0 51 41 70 89 4c fa d1 .........QAp.L.. 00:22:29.481 00000120 a3 2e 32 db da ee 36 b0 e7 d6 66 0b 26 12 2a 0f ..2...6...f.&.*. 00:22:29.481 00000130 35 40 65 f6 71 7d ab f7 68 c4 6f f0 65 e0 f7 4f 5@e.q}..h.o.e..O 00:22:29.481 00000140 17 aa 37 f9 68 5c 9e 03 41 f7 3c 78 35 13 8f 72 ..7.h\..A..!...w... 00:22:29.481 00000170 39 0a 24 92 86 ce fa 90 ab 49 19 67 5d 21 5e 02 9.$......I.g]!^. 00:22:29.481 00000180 ee 9f 4d 78 c4 26 3e d2 66 bb 32 a6 58 c1 78 5e ..Mx.&>.f.2.X.x^ 00:22:29.481 00000190 45 ba ab cc 85 c9 08 6f 6a 5f 64 93 3e 04 43 9b E......oj_d.>.C. 00:22:29.481 000001a0 0e db 46 a5 41 f3 6a c4 1a 7e aa 15 e2 f8 b2 ad ..F.A.j..~...... 00:22:29.481 000001b0 4d d4 1e cd f1 01 24 8e 7b b8 df b1 2d 8f 8d f2 M.....$.{...-... 00:22:29.481 000001c0 ea c4 18 a1 9d eb e7 0d ef 64 ea 9b 52 4c 25 62 .........d..RL%b 00:22:29.481 000001d0 54 ab 03 e1 71 b9 06 19 a0 cc d2 78 fe 0e 64 4c T...q......x..dL 00:22:29.481 000001e0 79 14 92 4f d0 ad ab e5 cf e7 98 71 1a 1d 91 3f y..O.......q...? 00:22:29.481 000001f0 13 55 d6 b6 82 8e 45 0c 87 3f 7c 5a b3 5c 17 46 .U....E..?|Z.\.F 00:22:29.481 00000200 b8 53 69 b4 7d 98 69 2e 39 29 be 41 46 62 a6 05 .Si.}.i.9).AFb.. 00:22:29.481 00000210 02 99 fd 13 fe d5 2c bd e3 f4 37 ad bf 46 1d d8 ......,...7..F.. 00:22:29.481 00000220 a0 39 38 c7 91 8c d7 80 71 4a 96 8d df 27 7e 43 .98.....qJ...'~C 00:22:29.481 00000230 3b 75 ce 72 e8 ea ef 48 a1 19 9c 8b 75 24 80 92 ;u.r...H....u$.. 00:22:29.481 00000240 68 f9 86 7f d0 a9 07 bf 0b ca a6 30 ad 29 1a 75 h..........0.).u 00:22:29.481 00000250 a7 6f 44 05 73 df 2a 7e 94 3a 46 26 39 5c 66 94 .oD.s.*~.:F&9\f. 00:22:29.481 00000260 4e de a0 d1 2d 44 7c 66 53 23 be 3c fd 20 79 c6 N...-D|fS#.<. y. 00:22:29.481 00000270 bf c9 48 65 a4 02 35 56 92 36 2f 43 20 89 47 e0 ..He..5V.6/C .G. 00:22:29.481 00000280 25 52 0b a1 9a c2 ca 2b 36 7e 25 d1 34 bb 33 0f %R.....+6~%.4.3. 00:22:29.481 00000290 c2 24 ab 97 5a 69 83 09 c9 6d c7 29 2a 9a 4b e8 .$..Zi...m.)*.K. 00:22:29.481 000002a0 ca ea 90 f3 7d f4 2c 18 e1 95 78 d6 4b e0 d0 65 ....}.,...x.K..e 00:22:29.481 000002b0 b9 ce 82 1f 14 e0 bb 06 74 47 36 7e 6a 07 dc b5 ........tG6~j... 
00:22:29.481 000002c0 b9 1a 07 e6 c1 fa 73 40 a6 08 e6 cd 3f 53 94 91 ......s@....?S.. 00:22:29.481 000002d0 21 ce 6a a5 b0 12 2a ef f2 74 a7 5c d0 84 18 83 !.j...*..t.\.... 00:22:29.481 000002e0 2a c7 d9 19 a3 88 8a 19 8e 01 a0 dc a3 43 84 95 *............C.. 00:22:29.481 000002f0 23 1d 71 c9 2c 6b 9a 21 a0 cd 22 4a 75 32 96 86 #.q.,k.!.."Ju2.. 00:22:29.481 00000300 8b 8d 57 f4 fb 68 d8 65 f7 22 42 44 c7 6c e9 7c ..W..h.e."BD.l.| 00:22:29.481 00000310 9a 52 9f 2a ce 0f 24 ef 52 4f fe 6b 1c 3b 30 f2 .R.*..$.RO.k.;0. 00:22:29.481 00000320 e8 d3 8f 06 10 3b a4 b0 63 a3 f1 3e 4d 54 7a f8 .....;..c..>MTz. 00:22:29.481 00000330 d8 62 83 99 e2 18 9a 6b e6 c6 ea 1d a2 3f e3 44 .b.....k.....?.D 00:22:29.481 00000340 9b 85 6a 3a 6b a8 db 30 b7 d5 38 ef a3 40 bc 1c ..j:k..0..8..@.. 00:22:29.481 00000350 be ae 9e 32 5e 00 6a 15 23 07 5d 33 93 20 23 b5 ...2^.j.#.]3. #. 00:22:29.481 00000360 04 40 ac 79 68 cb 8b 15 c9 83 6d 7c c4 6e 7f 75 .@.yh.....m|.n.u 00:22:29.481 00000370 3e c6 37 fd e3 72 37 ea 9d 83 04 5b 2d 0c 7a b3 >.7..r7....[-.z. 00:22:29.481 00000380 4c 00 18 ad d4 43 4f 02 f2 45 d2 2b 35 78 c7 81 L....CO..E.+5x.. 00:22:29.481 00000390 25 cc 1e 02 5d 7e e6 0f df 0f 67 79 2f dc d8 3a %...]~....gy/..: 00:22:29.481 000003a0 8d 35 de 95 4d a8 e5 00 96 a7 0b ef d3 14 81 e1 .5..M........... 00:22:29.481 000003b0 c4 b3 c3 ef e2 12 2b bb 29 e0 b5 5b e0 c2 b9 e7 ......+.)..[.... 00:22:29.481 000003c0 3f 92 37 87 6f 76 a1 8e 6d f3 18 b6 07 c1 80 a0 ?.7.ov..m....... 00:22:29.481 000003d0 50 09 78 6f ee c3 ac 0c 20 16 22 91 37 3a 67 d6 P.xo.... .".7:g. 00:22:29.481 000003e0 94 3f 88 80 6d 6b 93 d6 2b 42 a2 68 b8 a8 3e cc .?..mk..+B.h..>. 00:22:29.481 000003f0 ce cf f7 25 64 02 05 88 10 a1 80 23 01 02 e0 48 ...%d......#...H 00:22:29.481 host pubkey: 00:22:29.481 00000000 9c 2e 5f 60 d2 d3 89 87 90 6c 26 f4 0a a0 b2 40 .._`.....l&....@ 00:22:29.481 00000010 25 c2 67 a4 9d ba 39 c6 39 b6 a2 94 5c 0f 8f 99 %.g...9.9...\... 00:22:29.481 00000020 77 f8 49 b9 c0 5f 4b 58 02 0c 39 a5 42 c3 ed 08 w.I.._KX..9.B... 00:22:29.481 00000030 c2 83 58 b2 f1 e6 6a 37 53 68 4f 72 32 97 10 60 ..X...j7ShOr2..` 00:22:29.481 00000040 d0 34 9e 50 d9 a6 33 aa 4c b2 34 87 08 f0 c0 7f .4.P..3.L.4..... 00:22:29.481 00000050 32 2b 46 61 c6 cc fc ef 4b 83 f6 69 6c 14 fe 85 2+Fa....K..il... 00:22:29.482 00000060 f7 86 75 7c 12 f9 9b 1b 88 4b e3 74 ce 0c eb 3d ..u|.....K.t...= 00:22:29.482 00000070 f5 8c f6 89 0c 73 86 fa 6d ab 3a e7 8a 25 16 0d .....s..m.:..%.. 00:22:29.482 00000080 92 3e 50 4b ff 0c 5c 8f df c3 f0 a9 f2 1c e9 c1 .>PK..\......... 00:22:29.482 00000090 8e eb 16 e5 50 93 2c 18 98 27 ef f1 f1 05 0b 40 ....P.,..'.....@ 00:22:29.482 000000a0 0e 32 c3 c1 a0 b4 5d d5 8a 44 95 ab 58 e6 91 5e .2....]..D..X..^ 00:22:29.482 000000b0 b7 c4 41 51 07 75 71 7b 50 50 fe 48 0f 28 ef f1 ..AQ.uq{PP.H.(.. 00:22:29.482 000000c0 14 35 c1 9a 7d bc 56 ca 52 0f 32 1f 4f 42 ff a1 .5..}.V.R.2.OB.. 00:22:29.482 000000d0 31 4e c1 07 8e 00 85 df c1 f4 94 60 5d 69 03 62 1N.........`]i.b 00:22:29.482 000000e0 4e 21 e7 3e 9d ff a9 dc 32 f8 ab 9a 09 b9 e5 24 N!.>....2......$ 00:22:29.482 000000f0 fd 1c 8d c2 71 a5 b1 29 fa 54 be d5 66 34 08 ab ....q..).T..f4.. 00:22:29.482 00000100 5e 2f 6e e1 7e ce 05 8e af 27 f0 b4 0e 44 41 15 ^/n.~....'...DA. 00:22:29.482 00000110 19 2e 9a 7a b9 3d 86 71 d1 f0 67 7d e3 71 b9 81 ...z.=.q..g}.q.. 00:22:29.482 00000120 82 a4 7c 1c af 8f 4f eb 9f 3e 54 c7 75 40 db bc ..|...O..>T.u@.. 00:22:29.482 00000130 32 a7 6d ff 15 be 81 9d 17 0c 96 ca 82 a4 53 80 2.m...........S. 
00:22:29.482 00000140 92 6a 51 a0 e9 9d 76 24 4c 36 02 16 b0 6d bc 07 .jQ...v$L6...m.. 00:22:29.482 00000150 8f f4 ec b2 04 71 3e 36 7d 5f a6 b8 40 54 e6 40 .....q>6}_..@T.@ 00:22:29.482 00000160 b4 f6 3c 2c 9e dc db 84 8d 9a 89 ae 30 51 04 d2 ..<,........0Q.. 00:22:29.482 00000170 3d 95 b5 8f 14 e1 b9 d3 c5 98 84 b1 34 67 55 41 =...........4gUA 00:22:29.482 00000180 04 45 90 22 0b 2c 53 f3 61 4c 56 c4 ec cf 96 26 .E.".,S.aLV....& 00:22:29.482 00000190 8d 3e 84 c7 6c 9a 25 8a a3 32 cf a3 46 12 5b 30 .>..l.%..2..F.[0 00:22:29.482 000001a0 82 30 28 a0 25 93 a9 34 59 3b b0 fb c9 81 1b 3f .0(.%..4Y;.....? 00:22:29.482 000001b0 18 66 8d 41 bc 97 5d 77 e6 7a ab f6 f9 b8 21 ae .f.A..]w.z....!. 00:22:29.482 000001c0 d3 dd fb ff f8 fd 09 7c 1a 37 11 00 5d fb 2a 65 .......|.7..].*e 00:22:29.482 000001d0 ac f7 1f 7b 77 6d 23 1b 12 68 ca a8 79 f5 90 04 ...{wm#..h..y... 00:22:29.482 000001e0 ff f8 3c dc 7d 71 3d 76 3e 29 c6 63 cf 51 bd 49 ..<.}q=v>).c.Q.I 00:22:29.482 000001f0 62 bb 3e da f7 3e 7e 71 22 dc ff 63 17 be ea 20 b.>..>~q"..c... 00:22:29.482 00000200 59 25 50 4b 41 43 c1 18 b5 31 28 58 03 05 18 23 Y%PKAC...1(X...# 00:22:29.482 00000210 e6 c3 42 5e 5e 8f ea 17 86 a9 13 5e 5c 50 67 2d ..B^^......^\Pg- 00:22:29.482 00000220 06 fe cd 26 ee 4f 38 5a ca 2d 76 19 e0 a0 7f 3e ...&.O8Z.-v....> 00:22:29.482 00000230 49 91 ec ff 79 bd 93 b4 c9 e9 98 a0 e8 03 5d de I...y.........]. 00:22:29.482 00000240 16 51 08 d3 23 c7 85 6f ec 0f 32 73 a0 5f 4f 97 .Q..#..o..2s._O. 00:22:29.482 00000250 28 cc a5 bc 5d 75 98 ec d6 11 5c 23 77 aa 1c 5e (...]u....\#w..^ 00:22:29.482 00000260 07 3e 1b 76 32 42 0a 51 56 c1 cb 2f b6 65 b3 90 .>.v2B.QV../.e.. 00:22:29.482 00000270 af 1d 9b 9b ab 51 5c 7b b8 30 c7 ea 81 e4 88 2c .....Q\{.0....., 00:22:29.482 00000280 af 8a 63 67 5c 7e dd 07 45 cd d8 5f 4d 98 61 80 ..cg\~..E.._M.a. 00:22:29.482 00000290 a4 fd 60 57 f8 09 c2 b0 57 33 56 21 85 16 a6 5e ..`W....W3V!...^ 00:22:29.482 000002a0 b9 00 07 47 a3 d0 25 6f 0c a7 17 42 51 44 9b cb ...G..%o...BQD.. 00:22:29.482 000002b0 eb 68 eb 49 0f 32 40 76 f4 9c 5c ee 76 9d f3 b2 .h.I.2@v..\.v... 00:22:29.482 000002c0 74 9b 9e 32 d3 66 13 91 d3 a9 28 b5 90 a0 72 68 t..2.f....(...rh 00:22:29.482 000002d0 dc e1 f8 1c 02 ff 70 26 3a f8 63 4c 39 3d f4 4f ......p&:.cL9=.O 00:22:29.482 000002e0 c2 85 92 81 cb 6d 4d 48 eb 53 c2 72 6a 7a c4 12 .....mMH.S.rjz.. 00:22:29.482 000002f0 56 89 57 2b 10 b9 d7 12 dd 8a 1c 1e 06 49 91 6d V.W+.........I.m 00:22:29.482 00000300 bf ce 27 cc 51 6a e2 29 be a6 d0 e1 bd 4f 40 c5 ..'.Qj.).....O@. 00:22:29.482 00000310 74 b5 0a b3 3e ac 29 36 ce ad c0 95 0d 5c 01 62 t...>.)6.....\.b 00:22:29.482 00000320 42 86 5a 39 1b d1 ea ed 8f fe 40 73 26 dc 8c dd B.Z9......@s&... 00:22:29.482 00000330 64 c1 8f 6e 96 45 6c 09 53 46 95 73 7c 9e fb ec d..n.El.SF.s|... 00:22:29.482 00000340 81 a1 4a 31 0d 23 13 50 bb f6 5c 75 a2 13 1a fa ..J1.#.P..\u.... 00:22:29.482 00000350 52 8b 6e c1 15 f9 24 b5 81 a9 b8 23 7e ba 1d 23 R.n...$....#~..# 00:22:29.482 00000360 62 32 8b 82 81 56 f0 7e 81 ad 4e f5 89 ab 85 89 b2...V.~..N..... 00:22:29.482 00000370 7b fc df 5c b3 a8 4a 39 6e 87 c9 c5 5c 6d 6c d7 {..\..J9n...\ml. 00:22:29.482 00000380 27 3f dc ae 83 af ae 92 38 37 2a a4 56 73 92 44 '?......87*.Vs.D 00:22:29.482 00000390 ee 53 30 17 ae 94 66 43 bd b8 4a e4 d9 6a dd 77 .S0...fC..J..j.w 00:22:29.482 000003a0 f6 d1 57 94 3b 06 b5 9f 04 88 af 53 55 8d 3b 61 ..W.;......SU.;a 00:22:29.482 000003b0 5d 6a 47 e7 7c e6 56 70 7e 6c 38 b2 b4 f3 49 bb ]jG.|.Vp~l8...I. 
00:22:29.482 000003c0 af 72 ad 10 2a 7f af 22 54 2a ed cb 2c 97 07 90 .r..*.."T*..,... 00:22:29.482 000003d0 a5 96 6c 01 82 23 98 80 39 fe 9d 8b 4d 6c d7 4b ..l..#..9...Ml.K 00:22:29.482 000003e0 3c c0 a2 f3 6f 7b 6d 96 86 d3 f6 a2 99 44 72 3b <...o{m......Dr; 00:22:29.482 000003f0 2e 06 60 68 5c 34 a7 6c 47 ea ab 59 5f c1 66 82 ..`h\4.lG..Y_.f. 00:22:29.482 dh secret: 00:22:29.482 00000000 f5 5b 2b b9 8a cc fd 92 60 a3 f9 66 55 07 54 73 .[+.....`..fU.Ts 00:22:29.482 00000010 04 09 6d e3 0b 45 11 5a 03 bc 82 13 f1 49 47 e5 ..m..E.Z.....IG. 00:22:29.482 00000020 82 0a 5a d7 8e ae 69 aa a5 50 9b 57 f1 02 c5 b8 ..Z...i..P.W.... 00:22:29.482 00000030 f7 de 5c 8a 68 c7 04 53 37 a8 45 51 1c e9 56 c9 ..\.h..S7.EQ..V. 00:22:29.482 00000040 7a 63 42 71 1e b6 a2 e3 2e 85 cd 06 dd f4 34 d3 zcBq..........4. 00:22:29.482 00000050 c3 44 2c 67 74 69 fc 31 09 b4 33 80 42 1e a1 f2 .D,gti.1..3.B... 00:22:29.482 00000060 c5 25 89 b6 7a f7 da 06 24 43 03 0b a9 f8 16 8b .%..z...$C...... 00:22:29.482 00000070 45 70 77 60 8b 7a 47 c4 3c a1 0d 19 17 42 2e 02 Epw`.zG.<....B.. 00:22:29.482 00000080 e1 09 cc 9a ba 45 a2 48 c5 70 3d 90 f4 74 92 0a .....E.H.p=..t.. 00:22:29.482 00000090 96 bf c0 ca fb 4d e6 0b 64 8c 80 3c e3 12 9b c2 .....M..d..<.... 00:22:29.482 000000a0 75 8e df c2 61 67 3f e7 a5 a6 2a 53 01 7b bb 51 u...ag?...*S.{.Q 00:22:29.482 000000b0 f9 e9 ad 4b 87 59 a7 34 e5 00 39 f8 2b fe cb 6f ...K.Y.4..9.+..o 00:22:29.482 000000c0 52 e5 8a 4c c4 10 ac a6 76 cb 30 ba f7 d7 80 7b R..L....v.0....{ 00:22:29.482 000000d0 93 9b 01 8a ea 7c 6b 82 84 60 33 c3 d3 11 fb 8a .....|k..`3..... 00:22:29.482 000000e0 58 ec f3 c3 39 32 de 12 bb 9e 0c 3d 63 20 a4 5b X...92.....=c .[ 00:22:29.482 000000f0 c5 49 c0 0e da ba 5d de 06 92 1e 6d ab ec 54 72 .I....]....m..Tr 00:22:29.482 00000100 59 37 b1 74 74 2c 05 57 48 35 ed 87 f9 4b c5 65 Y7.tt,.WH5...K.e 00:22:29.482 00000110 91 55 6c f2 fb 15 50 9b 1d 41 ab a5 6c 81 72 2f .Ul...P..A..l.r/ 00:22:29.482 00000120 a2 89 57 38 7f c7 25 91 8f 21 2c e4 1d ca 4d 5a ..W8..%..!,...MZ 00:22:29.482 00000130 af ed 80 26 9d cb 78 db 48 73 d8 ce a8 9a c7 6c ...&..x.Hs.....l 00:22:29.482 00000140 5d 19 2d 13 fb 1e 60 31 4b 2c e1 65 70 bc 3d cc ].-...`1K,.ep.=. 00:22:29.482 00000150 3c 34 1d 3f 94 29 51 cb 92 9e 46 04 80 1d 8b 59 <4.?.)Q...F....Y 00:22:29.482 00000160 3e 9b 9f 9a 49 00 07 50 62 e4 64 89 7e 9b 96 45 >...I..Pb.d.~..E 00:22:29.482 00000170 1f 6c 06 50 c8 61 b6 e1 63 c2 45 6c f5 f0 9e a9 .l.P.a..c.El.... 00:22:29.482 00000180 cb 4c 18 f3 02 0c eb 02 69 9b a1 b7 3d e0 81 94 .L......i...=... 00:22:29.482 00000190 a4 df c3 78 8f 08 1e f6 1c 02 2d 2f 13 a8 b3 ba ...x......-/.... 00:22:29.482 000001a0 c2 01 8c fd 1a 33 bd 71 c9 84 d5 02 d6 c8 89 11 .....3.q........ 00:22:29.482 000001b0 9b 9a b4 f3 1b b9 5a 31 1c bb d2 e7 0c 0e bc 5a ......Z1.......Z 00:22:29.482 000001c0 d4 69 65 65 14 77 e8 35 21 ca 55 45 49 c4 32 45 .iee.w.5!.UEI.2E 00:22:29.482 000001d0 59 fe e0 7b 2a 15 8f cd 23 3a fb 1f ea ec f9 43 Y..{*...#:.....C 00:22:29.482 000001e0 e2 89 5f 47 db 9a 59 3d f8 3b 08 b1 6b ec f5 bd .._G..Y=.;..k... 00:22:29.482 000001f0 d5 a9 9b a9 a2 88 66 e3 90 31 fc 38 42 d8 ff 55 ......f..1.8B..U 00:22:29.482 00000200 c7 a5 8b 15 fd 24 ca c9 8a b1 7e be 1a 09 ac e2 .....$....~..... 00:22:29.482 00000210 1f 5b ae 38 bb 13 fa 3c 09 56 36 a4 ae 4f ff a0 .[.8...<.V6..O.. 00:22:29.482 00000220 5e 04 0d 9c 8c 87 6d 40 ee 6e f5 45 aa 57 75 31 ^.....m@.n.E.Wu1 00:22:29.482 00000230 b9 19 c0 3a bc ff 94 18 f8 ef 01 4d 37 1d 52 ad ...:.......M7.R. 
00:22:29.482 00000240 95 08 d3 7b d3 7c 8d 01 25 bb 65 97 73 23 be 0e ...{.|..%.e.s#.. 00:22:29.482 00000250 d3 cb ba b0 a4 75 3a 36 43 b6 6f 4e 69 5c 7e bc .....u:6C.oNi\~. 00:22:29.482 00000260 23 25 70 7b 12 5f 7f 95 2d 61 bf 93 8d d6 7b 8a #%p{._..-a....{. 00:22:29.482 00000270 47 7b 57 0c 25 d4 c8 a5 db 5f 28 97 46 71 0c 01 G{W.%...._(.Fq.. 00:22:29.482 00000280 b8 c5 50 c2 22 d5 d6 17 9c 41 7b ec d9 88 a4 71 ..P."....A{....q 00:22:29.482 00000290 f9 d2 6a 84 90 42 2b 07 52 2d d2 30 dd 5d f4 99 ..j..B+.R-.0.].. 00:22:29.482 000002a0 57 9e eb ca e7 fa 25 5a 6a c1 ff fd 1f ea d2 66 W.....%Zj......f 00:22:29.482 000002b0 8c 9d e6 33 ed e7 35 03 8d 3d f8 66 2a fb 1a a3 ...3..5..=.f*... 00:22:29.482 000002c0 aa 8c 11 4a 95 6b ef fe 13 62 35 de 86 a1 d9 6a ...J.k...b5....j 00:22:29.482 000002d0 ff 14 d1 26 4c c8 69 56 d9 96 63 e6 62 13 e6 f2 ...&L.iV..c.b... 00:22:29.482 000002e0 f9 bd 97 b8 1c 39 76 f1 a2 92 c2 a0 18 b9 04 53 .....9v........S 00:22:29.482 000002f0 08 f4 3a d0 a9 d8 4c bf f7 0c 18 c0 12 42 09 6d ..:...L......B.m 00:22:29.482 00000300 1e df 96 2e 6a 1b df cc 5d 8f 5f 01 69 68 93 ad ....j...]._.ih.. 00:22:29.482 00000310 d4 76 ca d5 6f 25 aa 78 bd 2f bb 74 57 ea c8 88 .v..o%.x./.tW... 00:22:29.482 00000320 26 2a eb ed f5 6f 99 71 06 53 1e 6e 9d 91 f5 6f &*...o.q.S.n...o 00:22:29.482 00000330 be 23 db 48 dd a3 1e 53 8e 4d 2c 02 c5 e4 37 e7 .#.H...S.M,...7. 00:22:29.482 00000340 3e 86 c8 bf af 6f 22 3d ab f3 50 48 e4 27 b4 4c >....o"=..PH.'.L 00:22:29.482 00000350 71 09 bf 5c 99 09 21 d3 6f cd df a7 b2 bd 6a 06 q..\..!.o.....j. 00:22:29.482 00000360 1d 40 60 ce 4f 30 ed a3 6e 7e aa 96 51 fd ed a5 .@`.O0..n~..Q... 00:22:29.482 00000370 12 4f 8d a1 5b 32 b2 50 6a 24 f3 9b e8 c8 d8 82 .O..[2.Pj$...... 00:22:29.482 00000380 30 00 f3 ca 9a 94 02 62 53 04 43 c5 e5 fb 5a b1 0......bS.C...Z. 00:22:29.482 00000390 a0 c3 82 ae 65 60 03 3f d7 0a 79 09 ea f5 16 20 ....e`.?..y.... 00:22:29.482 000003a0 ae 9e eb f8 fa f2 71 ea a2 da 26 e5 3c 37 fe 47 ......q...&.<7.G 00:22:29.482 000003b0 36 d5 e3 c8 8f 53 ac 92 44 80 a1 62 6c fb 94 26 6....S..D..bl..& 00:22:29.482 000003c0 92 13 dc 67 28 e9 6f b6 9d b7 a7 17 9e a0 96 f8 ...g(.o......... 00:22:29.482 000003d0 a3 00 52 b1 ed 51 79 56 d5 f6 cf 75 3e c0 4e aa ..R..QyV...u>.N. 00:22:29.482 000003e0 12 32 56 2b dd 34 ed 54 61 d5 e5 20 5c 89 61 6d .2V+.4.Ta.. \.am 00:22:29.482 000003f0 eb ff 0f bb ab e1 45 94 ca f0 86 7d 27 63 62 0d ......E....}'cb. 
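The ctrlr pubkey / host pubkey / dh secret dumps above are the two halves of a finite-field Diffie-Hellman exchange over the group named in the preceding negotiate record (dhgroup 5, ffdhe8192, hence the 1024-byte values). As a minimal sketch of how both ends arrive at the same secret, the stand-alone Python below uses a textbook-sized toy group instead of the RFC 7919 prime the trace actually negotiates; the variable names only mirror the dump labels, and nothing here is SPDK's nvme_auth.c code.

# Minimal finite-field Diffie-Hellman sketch (illustration only).
# The exchange above negotiated dhgroup 5 (ffdhe8192); a tiny textbook
# group stands in here so the example runs instantly, but the mechanics
# are the same: exchange public values, combine each with the local
# private exponent, and both sides land on the identical "dh secret".
import hashlib
import secrets

p, g = 23, 5                       # toy prime/generator (NOT an ffdhe group)

host_priv = secrets.randbelow(p - 2) + 1
ctrlr_priv = secrets.randbelow(p - 2) + 1

host_pub = pow(g, host_priv, p)    # "host pubkey" in the dump above
ctrlr_pub = pow(g, ctrlr_priv, p)  # "ctrlr pubkey" in the dump above

secret_host = pow(ctrlr_pub, host_priv, p)
secret_ctrlr = pow(host_pub, ctrlr_priv, p)
assert secret_host == secret_ctrlr             # the shared "dh secret"

# Hashing the secret with the negotiated digest (1 = sha256 in the record
# above) is shown only to connect the two parameters printed in the trace.
print(hashlib.sha256(secret_host.to_bytes(1, "big")).hexdigest())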
00:22:29.482 [2024-09-27 13:27:12.016233] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key4, hash=1, dhgroup=5, seq=3775755222, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.482 [2024-09-27 13:27:12.016530] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.482 [2024-09-27 13:27:12.102238] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.482 [2024-09-27 13:27:12.102810] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.482 [2024-09-27 13:27:12.102926] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.482 [2024-09-27 13:27:12.206164] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.482 [2024-09-27 13:27:12.206458] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 2 (sha384) 00:22:29.482 [2024-09-27 13:27:12.206762] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 1 (ffdhe2048) 00:22:29.483 [2024-09-27 13:27:12.206917] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.483 [2024-09-27 13:27:12.207194] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.483 ctrlr pubkey: 00:22:29.483 00000000 55 e8 75 ff 99 8b d8 a6 4b 96 f9 9a 9c 82 71 3f U.u.....K.....q? 00:22:29.483 00000010 f8 95 aa d5 02 5d eb 56 cd 86 4e f8 6c 8e fb 8d .....].V..N.l... 00:22:29.483 00000020 67 89 69 bb 01 61 5d 06 88 ae 69 d3 ad 6f cc b8 g.i..a]...i..o.. 00:22:29.483 00000030 a5 09 ad 90 f1 aa ff 4e 9f b8 79 48 40 67 f2 b2 .......N..yH@g.. 00:22:29.483 00000040 ab 29 26 01 8f ab a8 fe c9 dd 8f b6 fd eb cc 7f .)&............. 00:22:29.483 00000050 f8 80 11 ab 2d b2 0a 73 23 1b 7b c9 7a f9 74 ca ....-..s#.{.z.t. 00:22:29.483 00000060 1f e8 43 6d c5 5d 9c b4 1d 25 dc 3e bd 04 92 4a ..Cm.]...%.>...J 00:22:29.483 00000070 6e 76 db 9d 4a 76 e0 28 5f e0 5f 60 42 1e 23 52 nv..Jv.(_._`B.#R 00:22:29.483 00000080 0e 98 33 40 77 55 eb a6 24 3d 15 59 33 aa d9 a1 ..3@wU..$=.Y3... 00:22:29.483 00000090 df fd a7 cd b0 b4 37 51 7a ec f4 ac 8c 54 bc 7c ......7Qz....T.| 00:22:29.483 000000a0 ce d6 40 b0 23 c2 f8 ce 83 9b 66 13 45 2f dd cc ..@.#.....f.E/.. 00:22:29.483 000000b0 10 e2 80 e1 36 c5 d6 c3 ab d7 85 ee b4 01 0c aa ....6........... 00:22:29.483 000000c0 47 9e 7c 54 66 1d 60 ab d8 35 2b 87 72 19 83 9b G.|Tf.`..5+.r... 00:22:29.483 000000d0 df 1c 87 b3 44 8d 40 97 e7 58 e5 1f 62 a2 30 9e ....D.@..X..b.0. 00:22:29.483 000000e0 50 3a 5c 3e 60 c2 59 2b a6 cf 5c 46 ff 55 29 ca P:\>`.Y+..\F.U). 00:22:29.483 000000f0 03 99 32 b2 2d 47 0c 85 69 06 d1 04 b5 fa b9 c5 ..2.-G..i....... 00:22:29.483 host pubkey: 00:22:29.483 00000000 11 39 84 4f b6 dc 60 ce 50 fc e8 e3 6b 26 83 41 .9.O..`.P...k&.A 00:22:29.483 00000010 97 bd b7 68 0d 64 cc af ab 23 5f ce 53 6f 4e da ...h.d...#_.SoN. 00:22:29.483 00000020 56 79 4f 29 6f 1c 57 80 3f c1 e4 b4 04 d6 59 c1 VyO)o.W.?.....Y. 
00:22:29.483 00000030 8f de 52 4a 81 9a 99 76 a8 95 e7 e7 09 04 c0 f8 ..RJ...v........ 00:22:29.483 00000040 2a 21 ae d5 af c7 41 67 18 d8 3c ad ed 9d ad ec *!....Ag..<..... 00:22:29.483 00000050 e6 11 43 2b 80 0d 2d ac 9c 0e 18 8d 65 14 98 1c ..C+..-.....e... 00:22:29.483 00000060 0b 46 e8 40 3d cf 47 9f 57 ae 9c 50 3f 59 70 08 .F.@=.G.W..P?Yp. 00:22:29.483 00000070 a8 9c a7 af a7 bb 4a 76 ac aa 98 07 21 5e 58 59 ......Jv....!^XY 00:22:29.483 00000080 69 25 58 9a 68 c2 27 33 64 2c 00 e9 b8 86 bc 63 i%X.h.'3d,.....c 00:22:29.483 00000090 73 7d c0 9e e9 e3 ea 4b d0 fd 6d af 18 e2 fb 41 s}.....K..m....A 00:22:29.483 000000a0 26 d0 56 d1 24 db 81 fe 64 08 29 27 b8 b1 16 dd &.V.$...d.)'.... 00:22:29.483 000000b0 a5 01 15 09 1e ca b2 0a 76 d5 b9 9f 55 1b 73 0e ........v...U.s. 00:22:29.483 000000c0 73 04 8a 1c 2f 96 ed 48 c6 9a b9 48 b0 2f 4f 29 s.../..H...H./O) 00:22:29.483 000000d0 6b 19 21 fc d4 6b 12 94 e5 f1 bd 06 30 7f 42 39 k.!..k......0.B9 00:22:29.483 000000e0 bb e2 6e ae a5 b7 59 e0 56 5b cf 10 df 8e 98 64 ..n...Y.V[.....d 00:22:29.483 000000f0 f5 f7 6f cc 06 c7 e7 f5 5e 38 f9 85 e4 fa 0e 54 ..o.....^8.....T 00:22:29.483 dh secret: 00:22:29.483 00000000 4f 70 34 50 a3 65 3a df 52 2d ff 88 49 cd f1 81 Op4P.e:.R-..I... 00:22:29.483 00000010 14 91 6d 55 d0 9a b5 e9 3e 54 da df 7a 45 c5 98 ..mU....>T..zE.. 00:22:29.483 00000020 fe 74 97 54 be 96 7f 69 ee 7d 66 fa 2d 3c 75 de .t.T...i.}f.-...J 00:22:29.483 00000070 6e 76 db 9d 4a 76 e0 28 5f e0 5f 60 42 1e 23 52 nv..Jv.(_._`B.#R 00:22:29.483 00000080 0e 98 33 40 77 55 eb a6 24 3d 15 59 33 aa d9 a1 ..3@wU..$=.Y3... 00:22:29.483 00000090 df fd a7 cd b0 b4 37 51 7a ec f4 ac 8c 54 bc 7c ......7Qz....T.| 00:22:29.483 000000a0 ce d6 40 b0 23 c2 f8 ce 83 9b 66 13 45 2f dd cc ..@.#.....f.E/.. 00:22:29.483 000000b0 10 e2 80 e1 36 c5 d6 c3 ab d7 85 ee b4 01 0c aa ....6........... 00:22:29.483 000000c0 47 9e 7c 54 66 1d 60 ab d8 35 2b 87 72 19 83 9b G.|Tf.`..5+.r... 00:22:29.483 000000d0 df 1c 87 b3 44 8d 40 97 e7 58 e5 1f 62 a2 30 9e ....D.@..X..b.0. 00:22:29.483 000000e0 50 3a 5c 3e 60 c2 59 2b a6 cf 5c 46 ff 55 29 ca P:\>`.Y+..\F.U). 00:22:29.483 000000f0 03 99 32 b2 2d 47 0c 85 69 06 d1 04 b5 fa b9 c5 ..2.-G..i....... 00:22:29.483 host pubkey: 00:22:29.483 00000000 22 be c9 26 29 c7 e1 d4 aa 98 b7 e1 a0 1b 27 3f "..&).........'? 00:22:29.483 00000010 d5 e7 54 22 5e d1 1c bb f9 d9 d4 b4 05 59 ed 3c ..T"^........Y.< 00:22:29.483 00000020 36 fc fa 3d 8c 82 39 a4 06 5c 2f 84 5e 85 66 03 6..=..9..\/.^.f. 00:22:29.483 00000030 ba bf 21 01 ec 36 06 cb e1 e4 82 d5 d8 bd 97 59 ..!..6.........Y 00:22:29.483 00000040 de 12 4d 77 54 bf 25 fa c7 1d b9 dd 7f fe 26 68 ..MwT.%.......&h 00:22:29.483 00000050 94 5f 0b 72 39 4c ab b8 4b ae e5 27 12 a6 5b b4 ._.r9L..K..'..[. 00:22:29.483 00000060 d0 39 ea 59 47 71 11 29 f6 ca ca 5b 50 71 46 72 .9.YGq.)...[PqFr 00:22:29.483 00000070 d0 39 aa 27 e5 fe 3f d7 42 bb 1b ed f9 d4 61 58 .9.'..?.B.....aX 00:22:29.483 00000080 99 95 d4 f2 5c 2e 5b 7a 9c 5b 8b 35 ae de a9 62 ....\.[z.[.5...b 00:22:29.483 00000090 00 18 9b 56 04 0f b8 fb 80 b2 ca b4 b8 1f 83 b7 ...V............ 00:22:29.483 000000a0 60 6e 04 be d1 20 9c 5f 5d c6 72 58 8c d8 82 0e `n... ._].rX.... 00:22:29.483 000000b0 7c 08 ff 8a 66 70 cd 5b 24 78 16 9f d5 62 17 78 |...fp.[$x...b.x 00:22:29.483 000000c0 3c 67 e9 25 ad b2 94 4d de 03 6e d3 bc 1a 37 01 ..K..Vs..%[a 00:22:29.484 00000040 d2 04 d2 09 fc 83 8a fb d7 06 7e ac 7a 86 d1 7e ..........~.z..~ 00:22:29.484 00000050 66 bb e6 cd 25 e9 5a e7 6d 61 db bb 6a ec ca 29 f...%.Z.ma..j..) 
00:22:29.484 00000060 93 9f 80 9e 44 a8 bf c8 1d b6 21 56 06 44 2e d7 ....D.....!V.D.. 00:22:29.484 00000070 8d a8 af 71 5f 8f 4b b0 c9 68 a5 23 6f 11 80 08 ...q_.K..h.#o... 00:22:29.484 00000080 b9 03 87 eb ff bb b1 34 09 16 15 79 62 a6 d5 96 .......4...yb... 00:22:29.484 00000090 69 d4 93 3e ee 39 cc 02 88 f6 ee 5d 0e 5e 94 18 i..>.9.....].^.. 00:22:29.484 000000a0 6e b7 79 d8 7e f5 38 87 5c b0 db 35 55 f7 c6 b0 n.y.~.8.\..5U... 00:22:29.484 000000b0 28 07 b4 d0 0a 4e 9d 08 a5 0e a8 0f 40 50 da 34 (....N......@P.4 00:22:29.484 000000c0 35 fc 1b c3 72 f5 76 84 b8 cc 86 3f b7 74 8e 0c 5...r.v....?.t.. 00:22:29.484 000000d0 25 09 8c 0e f1 e3 24 a8 97 c5 85 53 82 d7 4c 29 %.....$....S..L) 00:22:29.484 000000e0 f4 9a bd 5d 1d 32 07 7b c1 62 55 8b fe 65 99 5d ...].2.{.bU..e.] 00:22:29.484 000000f0 a8 b8 38 a5 0d 3e f2 02 0c 62 ce 61 3a da ce 6a ..8..>...b.a:..j 00:22:29.484 dh secret: 00:22:29.484 00000000 00 67 55 11 27 cf 04 23 09 35 1f 50 0f 22 39 81 .gU.'..#.5.P."9. 00:22:29.484 00000010 50 75 fd 8e 10 38 66 d9 7c 80 17 c3 60 e6 4d f7 Pu...8f.|...`.M. 00:22:29.484 00000020 5c 21 0c 42 ba 74 55 cc 5a 7c 9b 11 1d cc 17 c3 \!.B.tU.Z|...... 00:22:29.484 00000030 4c d4 f4 44 cb f5 6a e2 f5 58 98 4f 70 f0 f3 c2 L..D..j..X.Op... 00:22:29.484 00000040 4c de 0d 1e 0d ab b0 fa 90 6a cb 8d 5c 4e 0d 7b L........j..\N.{ 00:22:29.484 00000050 9d bf 08 de 4e 62 87 ab 1f 41 1d 5d ad 04 7f 7c ....Nb...A.]...| 00:22:29.484 00000060 45 cf d7 97 4b 91 23 19 ad b0 42 bc 9c d3 7d 67 E...K.#...B...}g 00:22:29.484 00000070 f2 8f 3d 39 22 33 cb 64 3c e0 88 86 36 b0 2b 7b ..=9"3.d<...6.+{ 00:22:29.484 00000080 dc f4 b4 94 84 08 9c 1e 32 79 6c f5 cd cc f2 b3 ........2yl..... 00:22:29.484 00000090 05 87 3d 6f 07 c3 94 d1 4a 47 f1 54 bf 04 4d 18 ..=o....JG.T..M. 00:22:29.484 000000a0 65 67 3b 0f 74 6e 94 d7 64 07 5f 05 ac b9 7d 95 eg;.tn..d._...}. 00:22:29.484 000000b0 67 ce 98 fd 15 6e 2c 95 ab aa ef 3c b8 88 95 b2 g....n,....<.... 00:22:29.484 000000c0 3c 95 2a 36 4b 8b a5 24 24 4b 0f 7f db ce 59 dd <.*6K..$$K....Y. 
00:22:29.484 000000d0 26 76 8a 01 a0 3a 93 bb 83 25 c0 3a e3 ac 62 34 &v...:...%.:..b4 00:22:29.484 000000e0 25 bc 8e 2d ce 87 c9 46 e9 25 3f 80 78 e1 4a 47 %..-...F.%?.x.JG 00:22:29.484 000000f0 eb 83 47 7c c7 77 70 b0 35 3a 0a b5 45 ac 34 77 ..G|.wp.5:..E.4w 00:22:29.484 [2024-09-27 13:27:12.389466] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key1, hash=2, dhgroup=1, seq=3775755225, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.484 [2024-09-27 13:27:12.389765] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.484 [2024-09-27 13:27:12.393798] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.484 [2024-09-27 13:27:12.394056] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.484 [2024-09-27 13:27:12.394278] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.484 [2024-09-27 13:27:12.394455] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.484 [2024-09-27 13:27:12.446109] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.484 [2024-09-27 13:27:12.446360] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.484 [2024-09-27 13:27:12.446471] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 1 (ffdhe2048) 00:22:29.484 [2024-09-27 13:27:12.446592] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.484 [2024-09-27 13:27:12.446846] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.484 ctrlr pubkey: 00:22:29.484 00000000 7c 84 a6 4b 45 29 6d 86 fb 71 89 21 33 6d 8b dc |..KE)m..q.!3m.. 00:22:29.484 00000010 3b 5a 50 5a b0 9e e6 82 5d cf 4a bc e3 0c 6e b5 ;ZPZ....].J...n. 00:22:29.484 00000020 c5 dc 17 c5 76 82 3a 39 8a 9a 52 64 61 ec a9 62 ....v.:9..Rda..b 00:22:29.484 00000030 b6 5d e4 16 f6 64 f2 45 31 8a 8a 53 1b a5 08 33 .]...d.E1..S...3 00:22:29.484 00000040 2e 94 e5 f1 1a 99 5d ed 8a a8 1b 0d 15 5d 8d bd ......]......].. 00:22:29.484 00000050 66 eb 14 ed 78 98 0d 8e 66 9c e2 07 07 b1 3b d6 f...x...f.....;. 00:22:29.484 00000060 c9 9b 32 0b c5 50 78 96 2f 10 1d 4a 58 30 a2 44 ..2..Px./..JX0.D 00:22:29.484 00000070 82 e8 e6 a8 3a 5d 89 71 5b 80 80 82 ee b5 69 fc ....:].q[.....i. 00:22:29.484 00000080 20 c7 a6 dd 64 d4 ac 89 5e 99 d1 4d 3c 1a 36 1d ...d...^..M<.6. 00:22:29.484 00000090 81 1f a7 ba 25 e6 e4 1b ba d9 7c d7 76 74 ba dc ....%.....|.vt.. 00:22:29.484 000000a0 5d dc be f5 d8 a3 dd 2f c4 23 29 be cc 83 fa 10 ]....../.#)..... 00:22:29.484 000000b0 b8 77 2a ff 66 99 77 0d 9d d6 5a ad 49 a9 6b 76 .w*.f.w...Z.I.kv 00:22:29.484 000000c0 70 ff 76 5e 25 ba 55 6e b3 79 ff ff d7 9f c2 05 p.v^%.Un.y...... 00:22:29.484 000000d0 6b a5 66 bd 8d 0a 76 f2 ab b8 9a a0 74 e2 d1 f8 k.f...v.....t... 
00:22:29.484 000000e0 52 ba 4e 92 04 af 3a 43 54 90 15 a6 c5 78 e8 2f R.N...:CT....x./ 00:22:29.484 000000f0 8c 53 52 29 c3 f5 c4 ce b4 b7 1e 88 9b 71 3b da .SR).........q;. 00:22:29.484 host pubkey: 00:22:29.484 00000000 3d 8b c5 d0 97 e9 a9 c1 26 e8 0e 2d 13 2a 56 f5 =.......&..-.*V. 00:22:29.484 00000010 b7 af fa 93 af bf d8 61 b1 fe 93 57 91 8e 34 03 .......a...W..4. 00:22:29.484 00000020 f0 cb 1f a1 ab 9b 50 45 56 84 32 5a 59 0e f2 5b ......PEV.2ZY..[ 00:22:29.484 00000030 5b 37 4b de 1a e4 d5 e4 0b f0 90 bb a8 0e b6 f0 [7K............. 00:22:29.484 00000040 13 d2 55 43 ee b7 ae e0 11 cd 84 53 1b 4e cf 93 ..UC.......S.N.. 00:22:29.484 00000050 9e be 3a be b4 51 ff 8c 41 e4 f0 c3 4a 3e 4b a4 ..:..Q..A...J>K. 00:22:29.484 00000060 ba f6 22 56 2d 98 e9 d0 29 09 80 37 5e 89 b1 af .."V-...)..7^... 00:22:29.484 00000070 5c ec eb ea c0 93 9a 82 f8 07 dd 58 84 91 3e c9 \..........X..>. 00:22:29.484 00000080 39 8e 9a ee fa ad fa 29 b5 12 68 a6 44 9b 0a f3 9......)..h.D... 00:22:29.484 00000090 b9 14 f7 b2 70 ea 73 8e 05 8d 1a 37 92 97 b6 17 ....p.s....7.... 00:22:29.484 000000a0 8b 6a 73 f5 9d a0 eb fc a6 43 66 f3 ac af c0 8d .js......Cf..... 00:22:29.484 000000b0 23 82 0f 5f 73 82 fa f9 0b cc b1 c7 64 fe 71 61 #.._s.......d.qa 00:22:29.484 000000c0 87 63 a9 fa 8d 7b 6d 9b 57 c6 27 a3 e4 aa a6 af .c...{m.W.'..... 00:22:29.484 000000d0 60 9f f3 37 fa 8e 04 82 83 6d 03 a3 a6 1e 9e 1d `..7.....m...... 00:22:29.484 000000e0 0b bd 15 e8 fe 47 44 dc 0c f0 36 af 70 d8 79 0f .....GD...6.p.y. 00:22:29.484 000000f0 f4 07 5e 5f 3b c1 81 95 2e f9 88 b5 c4 ed 36 22 ..^_;.........6" 00:22:29.484 dh secret: 00:22:29.484 00000000 c5 1b 17 f1 03 6e a6 cc 47 73 e3 3e c1 46 b6 f9 .....n..Gs.>.F.. 00:22:29.484 00000010 07 82 eb c0 14 3c 5a 9a 0d fa 91 7e b6 85 b1 49 ........ 00:22:29.484 00000060 2d 4c b3 b9 fd 4e 41 01 12 85 ba 4f 8b 0d 0d 69 -L...NA....O...i 00:22:29.484 00000070 b3 3e 68 a1 41 9a 2a 04 55 70 3d d7 cd 53 b4 0e .>h.A.*.Up=..S.. 00:22:29.484 00000080 a1 98 a3 35 6c 58 f7 43 c4 24 02 e9 bc b0 6f e3 ...5lX.C.$....o. 00:22:29.484 00000090 a1 dd 29 9f b6 5b be 3e b0 04 d6 94 db c8 5c b3 ..)..[.>......\. 00:22:29.484 000000a0 5c a1 3a a8 ef a4 62 c8 4c 58 30 99 09 40 b4 f8 \.:...b.LX0..@.. 00:22:29.484 000000b0 ee dd d7 43 33 e2 bb b0 8f 46 0b 02 50 39 22 e7 ...C3....F..P9". 00:22:29.484 000000c0 8c c0 dc ba f9 62 d6 03 ea b6 65 e6 07 c6 f5 71 .....b....e....q 00:22:29.484 000000d0 68 a2 fd 30 56 e3 30 b5 0b 7a e5 4a 88 c3 ba 5e h..0V.0..z.J...^ 00:22:29.484 000000e0 8f a6 88 fa 98 a9 d3 4c 1d d9 1f 22 7b df 22 9c .......L..."{.". 
00:22:29.485 000000f0 d5 68 f2 9b 75 0b f5 ec 3c e4 05 87 f3 01 77 27 .h..u...<.....w' 00:22:29.485 [2024-09-27 13:27:12.453054] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key1, hash=2, dhgroup=1, seq=3775755226, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.485 [2024-09-27 13:27:12.453344] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.485 [2024-09-27 13:27:12.457385] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.485 [2024-09-27 13:27:12.457690] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.485 [2024-09-27 13:27:12.457847] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.485 [2024-09-27 13:27:12.458136] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.485 [2024-09-27 13:27:12.557217] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.485 [2024-09-27 13:27:12.557452] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 2 (sha384) 00:22:29.485 [2024-09-27 13:27:12.557619] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 1 (ffdhe2048) 00:22:29.485 [2024-09-27 13:27:12.557829] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.485 [2024-09-27 13:27:12.558046] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.485 ctrlr pubkey: 00:22:29.485 00000000 15 8c e0 74 03 83 d6 61 14 90 1f a8 c0 d6 8c 7c ...t...a.......| 00:22:29.485 00000010 d3 0d bc 04 47 bd cd ee d5 93 08 d2 6d 46 f9 2b ....G.......mF.+ 00:22:29.485 00000020 07 49 40 f8 f2 64 78 0a 31 3c 0d 13 cc 2a c5 21 .I@..dx.1<...*.! 00:22:29.485 00000030 24 fe 0f 18 04 06 15 e5 1c c6 47 e4 6b cf 73 cd $.........G.k.s. 00:22:29.485 00000040 b7 cc d7 bc 32 f8 d1 bf c2 7b 3b 57 8a bf 96 48 ....2....{;W...H 00:22:29.485 00000050 eb 7b a1 50 ac 44 b8 63 75 bf ca 5f f3 a9 37 44 .{.P.D.cu.._..7D 00:22:29.485 00000060 96 d8 fd 80 06 22 ae 79 ac 94 9a c7 db 98 5d e0 .....".y......]. 00:22:29.485 00000070 01 59 56 02 6c fc b1 e9 85 d3 35 5f d3 36 e7 f4 .YV.l.....5_.6.. 00:22:29.485 00000080 f5 62 df 16 16 48 c6 2e dc 00 8f b7 06 97 9d ec .b...H.......... 00:22:29.485 00000090 5b 2b e1 4e 7f e3 19 fe 71 ac 9d 19 31 4f cb c5 [+.N....q...1O.. 00:22:29.485 000000a0 c3 74 96 74 c0 45 38 65 f2 92 ed ae 54 35 79 27 .t.t.E8e....T5y' 00:22:29.485 000000b0 af 49 4f 9e 11 11 c6 5f 69 ae a4 da 22 88 1d 81 .IO...._i..."... 00:22:29.485 000000c0 1f f4 9f 91 35 76 55 a4 34 da ad 67 f9 0f 28 18 ....5vU.4..g..(. 00:22:29.485 000000d0 6d 07 af 7a d3 8c f0 bd 43 d2 23 92 df f1 69 ec m..z....C.#...i. 00:22:29.485 000000e0 31 5a 90 54 99 70 ba 55 56 d0 c8 a6 8a 62 72 cb 1Z.T.p.UV....br. 
00:22:29.485 000000f0 62 6a c2 e6 e5 30 ac 6b 9b 57 c0 55 6c e2 a6 57 bj...0.k.W.Ul..W 00:22:29.485 host pubkey: 00:22:29.485 00000000 b7 dc ad bb 0d dd 51 c7 7f 8d c4 36 a9 13 02 a1 ......Q....6.... 00:22:29.485 00000010 dd 83 29 fe e9 92 3c c1 fc 43 6b b7 60 88 29 53 ..)...<..Ck.`.)S 00:22:29.485 00000020 20 93 24 53 85 2f 62 7b 29 c7 ec d2 bf 28 0b ec .$S./b{)....(.. 00:22:29.485 00000030 f6 9b 64 e6 07 45 03 80 23 07 13 4a 60 5e 15 e0 ..d..E..#..J`^.. 00:22:29.485 00000040 b4 08 5a f3 9a 4c 3c 9e 8a ee e1 6d 47 17 5f 84 ..Z..L<....mG._. 00:22:29.485 00000050 59 92 60 39 72 ea 95 e6 eb 14 e2 e4 f5 76 b1 8e Y.`9r........v.. 00:22:29.485 00000060 df c0 e7 19 5f f6 ba e7 02 60 0d ad e8 82 0b 32 ...._....`.....2 00:22:29.485 00000070 9a d6 13 39 cf 17 e6 c8 19 04 08 8a 94 39 55 be ...9.........9U. 00:22:29.485 00000080 37 af 92 06 f0 e0 a3 d0 c7 02 bb f4 24 16 04 59 7...........$..Y 00:22:29.485 00000090 11 be 61 c9 ee 7d 3e b8 5b 95 4b 75 4e cb fb bd ..a..}>.[.KuN... 00:22:29.485 000000a0 77 68 02 88 7a 62 29 56 ce ed 78 53 d4 5a 81 e5 wh..zb)V..xS.Z.. 00:22:29.485 000000b0 c0 3e b2 e6 68 d3 fd bf 71 93 e0 5e c6 9f 40 2a .>..h...q..^..@* 00:22:29.485 000000c0 64 ec d2 ec 2f 21 0d 7c cf 36 70 cc 53 9f 8a b7 d.../!.|.6p.S... 00:22:29.485 000000d0 63 42 08 d4 ce ee b6 72 54 af fc 1f 5f 0a e2 b7 cB.....rT..._... 00:22:29.485 000000e0 a1 d9 f4 4c 50 3e 6c 8b 9c b4 a3 98 d8 57 4a 95 ...LP>l......WJ. 00:22:29.485 000000f0 36 d5 2c 35 83 f6 02 4c aa d8 7f e3 bc 94 1c b3 6.,5...L........ 00:22:29.485 dh secret: 00:22:29.485 00000000 06 cb 86 3b 65 65 bd f7 fd d3 f0 e4 b4 2e 3a 1f ...;ee........:. 00:22:29.485 00000010 ca 4e 00 93 45 c2 79 3e c7 7e 1e 75 1b 5b ce 5f .N..E.y>.~.u.[._ 00:22:29.485 00000020 63 56 2b cb 4b d4 11 4c f8 f3 81 fc 20 fe 92 ea cV+.K..L.... ... 00:22:29.485 00000030 e9 48 0e 7d 4b 44 4c 31 e6 cc ed 30 76 d4 9a 4a .H.}KDL1...0v..J 00:22:29.485 00000040 b8 42 0b 71 b8 52 63 80 57 83 8e 3f 7b eb 12 22 .B.q.Rc.W..?{.." 00:22:29.485 00000050 83 58 ff 7b 02 a8 7f f0 b4 94 84 c8 99 87 0e a5 .X.{............ 00:22:29.485 00000060 5e 29 ed 1f 26 d3 89 ba de af 38 13 fc 8b 9d 18 ^)..&.....8..... 00:22:29.485 00000070 bd 14 66 e0 d5 9d d8 b3 31 d8 e4 f8 dd 36 ad df ..f.....1....6.. 00:22:29.485 00000080 b4 f3 87 c2 33 0b d8 3d b8 03 9b 87 64 22 6d 12 ....3..=....d"m. 00:22:29.485 00000090 6a 7c 6b 52 7e af cb 84 7a 01 3a 3b f2 27 69 fe j|kR~...z.:;.'i. 00:22:29.485 000000a0 e2 f2 39 c5 81 0a ca 59 b3 41 d9 58 1d 19 61 55 ..9....Y.A.X..aU 00:22:29.485 000000b0 11 f0 25 00 ba a2 4b b2 b0 b3 10 4a a8 25 b5 66 ..%...K....J.%.f 00:22:29.485 000000c0 b0 8f d5 34 64 31 2b 20 ee f0 51 9a b2 00 a6 df ...4d1+ ..Q..... 00:22:29.485 000000d0 79 f3 e1 74 78 d6 41 20 0f 42 26 10 53 fe 76 07 y..tx.A .B&.S.v. 00:22:29.485 000000e0 d8 97 3a f9 39 cf 79 ae c8 b8 50 ae 39 e2 38 b0 ..:.9.y...P.9.8. 00:22:29.485 000000f0 44 72 a7 03 f6 5c 97 02 0c 45 7f 47 92 21 ac a4 Dr...\...E.G.!.. 
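Each completed exchange is summarized by a single nvme_auth_send_reply record (one follows immediately) that packs the negotiated parameters into numeric fields: key=keyN, hash, dhgroup, seq, tid and the reply length. Below is a small sketch for pulling those fields out of such a line and mapping the IDs to the names the negotiate records print (sha256/sha384, ffdhe2048/ffdhe3072/ffdhe8192); the regex and lookup tables are derived only from the lines in this trace, so treat them as assumptions outside it.

# Sketch: extract the key/hash/dhgroup/seq/tid/len fields from one
# nvme_auth_send_reply debug record, as printed in the trace above.
import re

# Numeric IDs -> names, as printed by the negotiate records in this log.
HASHES = {1: "sha256", 2: "sha384"}
DHGROUPS = {1: "ffdhe2048", 2: "ffdhe3072", 5: "ffdhe8192"}

# Sample record copied verbatim from the reply that follows this exchange.
LINE = ("[2024-09-27 13:27:12.564078] nvme_auth.c: 950:nvme_auth_send_reply: "
        "*DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] "
        "key=key2, hash=2, dhgroup=1, seq=3775755227, tid=0, "
        "subnqn=nqn.2024-02.io.spdk:cnode0, "
        "hostnqn=nqn.2024-02.io.spdk:host0, len=48")

FIELDS = re.compile(
    r"key=key(?P<key>\d+), hash=(?P<hash>\d+), dhgroup=(?P<dhgroup>\d+), "
    r"seq=(?P<seq>\d+), tid=(?P<tid>\d+), .*len=(?P<len>\d+)")

m = FIELDS.search(LINE)
if m:
    print("key index :", m["key"])
    print("digest    :", HASHES.get(int(m["hash"]), "unknown"))
    print("dhgroup   :", DHGROUPS.get(int(m["dhgroup"]), "unknown"))
    print("seq/tid   :", m["seq"], "/", m["tid"])
    print("reply len :", m["len"], "bytes")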
00:22:29.485 [2024-09-27 13:27:12.564078] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key2, hash=2, dhgroup=1, seq=3775755227, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.485 [2024-09-27 13:27:12.564361] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.485 [2024-09-27 13:27:12.568118] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.485 [2024-09-27 13:27:12.568435] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.485 [2024-09-27 13:27:12.568641] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.485 [2024-09-27 13:27:12.568818] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.485 [2024-09-27 13:27:12.620455] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.485 [2024-09-27 13:27:12.620720] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.485 [2024-09-27 13:27:12.620842] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 1 (ffdhe2048) 00:22:29.485 [2024-09-27 13:27:12.621003] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.485 [2024-09-27 13:27:12.621212] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.485 ctrlr pubkey: 00:22:29.485 00000000 15 8c e0 74 03 83 d6 61 14 90 1f a8 c0 d6 8c 7c ...t...a.......| 00:22:29.485 00000010 d3 0d bc 04 47 bd cd ee d5 93 08 d2 6d 46 f9 2b ....G.......mF.+ 00:22:29.485 00000020 07 49 40 f8 f2 64 78 0a 31 3c 0d 13 cc 2a c5 21 .I@..dx.1<...*.! 00:22:29.485 00000030 24 fe 0f 18 04 06 15 e5 1c c6 47 e4 6b cf 73 cd $.........G.k.s. 00:22:29.485 00000040 b7 cc d7 bc 32 f8 d1 bf c2 7b 3b 57 8a bf 96 48 ....2....{;W...H 00:22:29.485 00000050 eb 7b a1 50 ac 44 b8 63 75 bf ca 5f f3 a9 37 44 .{.P.D.cu.._..7D 00:22:29.485 00000060 96 d8 fd 80 06 22 ae 79 ac 94 9a c7 db 98 5d e0 .....".y......]. 00:22:29.485 00000070 01 59 56 02 6c fc b1 e9 85 d3 35 5f d3 36 e7 f4 .YV.l.....5_.6.. 00:22:29.485 00000080 f5 62 df 16 16 48 c6 2e dc 00 8f b7 06 97 9d ec .b...H.......... 00:22:29.485 00000090 5b 2b e1 4e 7f e3 19 fe 71 ac 9d 19 31 4f cb c5 [+.N....q...1O.. 00:22:29.485 000000a0 c3 74 96 74 c0 45 38 65 f2 92 ed ae 54 35 79 27 .t.t.E8e....T5y' 00:22:29.485 000000b0 af 49 4f 9e 11 11 c6 5f 69 ae a4 da 22 88 1d 81 .IO...._i..."... 00:22:29.485 000000c0 1f f4 9f 91 35 76 55 a4 34 da ad 67 f9 0f 28 18 ....5vU.4..g..(. 00:22:29.485 000000d0 6d 07 af 7a d3 8c f0 bd 43 d2 23 92 df f1 69 ec m..z....C.#...i. 00:22:29.485 000000e0 31 5a 90 54 99 70 ba 55 56 d0 c8 a6 8a 62 72 cb 1Z.T.p.UV....br. 00:22:29.485 000000f0 62 6a c2 e6 e5 30 ac 6b 9b 57 c0 55 6c e2 a6 57 bj...0.k.W.Ul..W 00:22:29.485 host pubkey: 00:22:29.485 00000000 90 a6 0f e4 2d 1f f3 98 a4 60 a4 97 38 31 9f 92 ....-....`..81.. 
00:22:29.485 00000010 4f 07 bd 08 eb 79 01 93 0f 6c 74 a2 29 6d b8 17 O....y...lt.)m.. 00:22:29.485 00000020 25 6f e7 06 c1 12 36 7a ac 1d 5f 80 30 d3 9f dc %o....6z.._.0... 00:22:29.485 00000030 2d 48 f8 76 e8 42 2b c3 90 69 63 06 62 28 fc fd -H.v.B+..ic.b(.. 00:22:29.485 00000040 06 48 c0 b5 6b f8 7c 11 91 c8 48 22 07 89 27 cb .H..k.|...H"..'. 00:22:29.485 00000050 43 8b 2a d2 30 45 d7 a9 81 5e f2 5c cd 57 0e af C.*.0E...^.\.W.. 00:22:29.485 00000060 0e 67 04 3e 39 11 14 ba d7 43 39 51 08 9f 8b e6 .g.>9....C9Q.... 00:22:29.485 00000070 d0 27 d6 60 5d d8 a3 ff 87 80 f5 fa bc 88 8b e2 .'.`]........... 00:22:29.485 00000080 9b 9e d2 66 81 d0 33 b4 5e 67 8d dd a5 a2 be a9 ...f..3.^g...... 00:22:29.485 00000090 32 6a 31 49 8d 18 58 10 06 03 5c 8f 0e a5 3a 0f 2j1I..X...\...:. 00:22:29.485 000000a0 72 12 8c e4 ee ed 68 97 03 5c a5 cc 2d c5 6a 1c r.....h..\..-.j. 00:22:29.485 000000b0 85 37 54 9c a6 b5 67 a1 33 40 97 80 0e f2 75 b6 .7T...g.3@....u. 00:22:29.485 000000c0 da 0b c5 69 07 55 71 2e 0a 01 7f 2c 26 1f b5 28 ...i.Uq....,&..( 00:22:29.485 000000d0 eb bd 84 ce 33 80 34 af 45 4a 1d 92 fd 80 0d cb ....3.4.EJ...... 00:22:29.485 000000e0 3d 57 e1 f9 b8 64 ce 45 23 b5 fd 05 e1 cf c6 6d =W...d.E#......m 00:22:29.485 000000f0 b9 68 bb 7d e0 1e e6 21 3d 53 12 ea 9c 89 7b d3 .h.}...!=S....{. 00:22:29.485 dh secret: 00:22:29.485 00000000 57 11 58 b6 2b 33 97 72 45 cf 35 de 08 58 58 10 W.X.+3.rE.5..XX. 00:22:29.485 00000010 82 a4 d1 1b 65 16 27 bd 05 21 1e ad ed 60 ca f5 ....e.'..!...`.. 00:22:29.485 00000020 d2 f0 b4 52 83 5b 4c 1c 96 8e a4 98 1b 25 ab ff ...R.[L......%.. 00:22:29.485 00000030 18 f3 f9 8f c1 d7 f2 76 93 7e 97 96 cf 90 67 95 .......v.~....g. 00:22:29.485 00000040 cf c7 3f 49 46 8a 99 67 9f c2 75 fa 65 db 4f 46 ..?IF..g..u.e.OF 00:22:29.485 00000050 da 85 74 f0 45 4b ce 28 eb 80 b5 3f 49 62 37 93 ..t.EK.(...?Ib7. 00:22:29.485 00000060 86 f7 c4 ab cf b1 16 e1 db 02 c6 5c f8 16 aa 5c ...........\...\ 00:22:29.485 00000070 d6 b0 c7 c8 dc 66 43 2e 14 e5 c0 7f f2 17 df 11 .....fC......... 00:22:29.485 00000080 19 6f 58 14 94 87 c2 a6 bb c2 59 fd e9 35 12 11 .oX.......Y..5.. 00:22:29.485 00000090 3d 28 45 7b 01 0c 41 fc 6b 9f 2a cc 71 d7 f2 2b =(E{..A.k.*.q..+ 00:22:29.485 000000a0 b7 4b 84 60 05 f2 0c e0 1d 12 ae 75 6d 43 ed 74 .K.`.......umC.t 00:22:29.485 000000b0 5d cb 38 08 fb 66 93 ef c8 4b e1 56 f9 b6 97 e2 ].8..f...K.V.... 00:22:29.485 000000c0 16 c0 56 9c 2b 27 b6 79 d4 09 ea fc cd 90 6f 7b ..V.+'.y......o{ 00:22:29.485 000000d0 c7 13 31 95 2a 41 21 67 0a a8 f8 2b 31 c1 0d b7 ..1.*A!g...+1... 00:22:29.485 000000e0 14 70 9c 3d 1f 5f 5f 17 2f cf 2f bb 4e e1 38 e7 .p.=.__././.N.8. 00:22:29.485 000000f0 89 d2 02 18 be f8 6c 9b 96 2d 99 99 eb 69 7e 0f ......l..-...i~. 
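Every qpair in this trace walks the same auth state sequence: negotiate, await-negotiate, await-challenge, await-reply, await-success1, then done, with some exchanges inserting await-success2 before done. The sketch below merely encodes that observed ordering as a checker for the "auth state:" lines; it is not SPDK's internal state machine, and any transition that does not appear in this log is simply unknown to it.

# Sketch: the auth-state progression as it appears in this trace.
# Each state maps to the states the log shows it moving to next.
OBSERVED_NEXT = {
    "negotiate": {"await-negotiate"},
    "await-negotiate": {"await-challenge"},
    "await-challenge": {"await-reply"},
    "await-reply": {"await-success1"},
    # Some exchanges above go straight to done, others via await-success2.
    "await-success1": {"await-success2", "done"},
    "await-success2": {"done"},
    "done": set(),
}

def check(states):
    """Return True if the sequence matches the progression seen in this log."""
    for cur, nxt in zip(states, states[1:]):
        if nxt not in OBSERVED_NEXT.get(cur, set()):
            return False
    return True

# Sequence taken from one qpair in the records above.
print(check(["negotiate", "await-negotiate", "await-challenge",
             "await-reply", "await-success1", "await-success2", "done"]))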
00:22:29.486 [2024-09-27 13:27:12.627063] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key2, hash=2, dhgroup=1, seq=3775755228, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.486 [2024-09-27 13:27:12.627368] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.486 [2024-09-27 13:27:12.631150] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.486 [2024-09-27 13:27:12.631434] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.486 [2024-09-27 13:27:12.631646] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.486 [2024-09-27 13:27:12.631862] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.486 [2024-09-27 13:27:12.734882] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.486 [2024-09-27 13:27:12.735084] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 2 (sha384) 00:22:29.486 [2024-09-27 13:27:12.735204] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 1 (ffdhe2048) 00:22:29.486 [2024-09-27 13:27:12.735333] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.486 [2024-09-27 13:27:12.735536] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.486 ctrlr pubkey: 00:22:29.486 00000000 6d 3f 2a 12 0e 20 97 2f 20 3e f1 22 e4 15 95 43 m?*.. ./ >."...C 00:22:29.486 00000010 57 bd 4a 4c 07 8d 57 21 6c 88 28 f5 2a 8f d0 7b W.JL..W!l.(.*..{ 00:22:29.486 00000020 2c 7a 80 4d 4b d5 d3 38 bd 6a 01 b4 98 5e 4c 44 ,z.MK..8.j...^LD 00:22:29.486 00000030 47 23 21 dd f6 1f 6c f0 01 90 7c eb 88 b2 b9 34 G#!...l...|....4 00:22:29.486 00000040 d2 0c 16 0d 4d f0 42 fa 7b dc e9 95 cb 8a f7 4d ....M.B.{......M 00:22:29.486 00000050 1b 90 80 11 5b ea 07 60 2b 2c b0 78 8e f9 64 8f ....[..`+,.x..d. 00:22:29.486 00000060 9f 6f 4b 6c d2 5f 1c ba f0 c9 27 b5 20 a2 ce 57 .oKl._....'. ..W 00:22:29.486 00000070 5f 3f b1 c4 dd c4 10 66 70 c5 00 97 26 92 6c b2 _?.....fp...&.l. 00:22:29.486 00000080 e9 88 cc 21 6e ba 35 3c 46 26 59 32 6a d8 82 dd ...!n.5.....S3.... 00:22:29.486 000000b0 4c 69 72 71 9e a3 e4 47 a1 a6 dd e2 18 b1 19 99 Lirq...G........ 00:22:29.486 000000c0 24 b7 46 02 8b 52 9b 6a 43 51 bf 99 a9 e4 34 15 $.F..R.jCQ....4. 00:22:29.486 000000d0 a5 89 7f 37 f7 0f a6 7c 14 82 99 dc 92 c6 5e c3 ...7...|......^. 00:22:29.486 000000e0 6a 70 5a e0 a1 be 80 62 67 b4 85 d4 b7 23 38 77 jpZ....bg....#8w 00:22:29.486 000000f0 44 9c 8f 7e 9c 88 57 21 41 18 5e 74 84 7f 5e 3d D..~..W!A.^t..^= 00:22:29.486 host pubkey: 00:22:29.486 00000000 f8 5f 3a d3 ac ce 87 43 b5 07 b7 ad 08 b2 59 59 ._:....C......YY 00:22:29.486 00000010 5e 59 20 5b 06 cb 3f 58 20 68 d9 1e ea 14 ef 8f ^Y [..?X h...... 00:22:29.486 00000020 04 32 e4 c9 7b 33 c0 fa 1d 2b 96 39 c6 fe 64 e0 .2..{3...+.9..d. 
00:22:29.486 00000030 dd 25 7f 96 29 fe 35 00 7e d2 ee 27 8b 0d f0 01 .%..).5.~..'.... 00:22:29.486 00000040 f5 2b f5 ec ee 82 f0 57 13 41 59 9c c7 6b 72 eb .+.....W.AY..kr. 00:22:29.486 00000050 68 6c c0 d6 07 42 b9 f8 0a ae 9e 5d 2c 1c 96 61 hl...B.....],..a 00:22:29.486 00000060 c1 73 98 6a 63 2b a9 bf 04 ef 59 18 59 ed 04 e1 .s.jc+....Y.Y... 00:22:29.486 00000070 c4 67 17 37 ee c6 8e d1 f0 78 29 e8 97 00 53 72 .g.7.....x)...Sr 00:22:29.486 00000080 84 4e 81 ca 97 b4 0c a6 56 ee 6d fe 91 fb fd 19 .N......V.m..... 00:22:29.486 00000090 15 5c cd 7a e0 c0 ee 23 71 90 9c 22 f6 38 3a 69 .\.z...#q..".8:i 00:22:29.486 000000a0 be aa 5d fe 27 5d 1b bb df 62 ed 03 a7 c8 8e dd ..].']...b...... 00:22:29.486 000000b0 dc 4a 3b ed 4a 9a b9 b0 b2 f1 44 45 c1 60 5e a4 .J;.J.....DE.`^. 00:22:29.486 000000c0 48 64 f7 0b 6f 6a b5 87 d0 e1 c1 3e 83 ea 67 33 Hd..oj.....>..g3 00:22:29.486 000000d0 ef ac 0a 63 65 35 24 b4 9f 15 b1 8c ca 95 89 d1 ...ce5$......... 00:22:29.486 000000e0 e3 08 64 b8 93 2e 84 f9 5e a5 7f 82 09 67 aa ea ..d.....^....g.. 00:22:29.486 000000f0 3e 03 1b c0 83 31 aa 04 7c 27 dc b6 63 96 d8 9b >....1..|'..c... 00:22:29.486 dh secret: 00:22:29.486 00000000 0a 64 88 e2 1c 52 d0 fb ee a2 8f a1 9c a7 3f 1d .d...R........?. 00:22:29.486 00000010 05 e4 70 60 a0 5e 43 9b 98 7a 01 87 9d a0 9e fe ..p`.^C..z...... 00:22:29.486 00000020 ae 20 7d c7 2f 7f e2 3c 34 d7 c5 57 93 17 08 9e . }./..<4..W.... 00:22:29.486 00000030 48 d2 1b 9b a4 d7 8b db cb ef ba ae b3 2b 59 56 H............+YV 00:22:29.486 00000040 01 85 90 42 45 56 b1 4d 06 d4 bf 55 a6 c4 32 29 ...BEV.M...U..2) 00:22:29.486 00000050 3e 4e 26 ab 49 66 64 ea a0 36 be 37 b2 34 cf 43 >N&.Ifd..6.7.4.C 00:22:29.486 00000060 b9 58 e1 02 29 e5 65 fe 80 66 c8 ce 9f 42 53 bf .X..).e..f...BS. 00:22:29.486 00000070 24 8e 49 0d a0 35 d4 b8 2f 83 b3 f0 72 37 b6 50 $.I..5../...r7.P 00:22:29.486 00000080 89 d8 7a de 83 1c d1 89 29 b7 b9 ea 99 03 6f 1a ..z.....).....o. 00:22:29.486 00000090 f3 d8 49 a0 df 64 bc 02 40 ea a9 e5 75 c0 8c a1 ..I..d..@...u... 00:22:29.486 000000a0 84 7d 1d eb 80 d5 bc f0 bc 7b a4 ae d0 da df f5 .}.......{...... 00:22:29.486 000000b0 66 b1 1d d1 54 d8 b2 69 13 22 e7 94 86 a0 be 7b f...T..i.".....{ 00:22:29.486 000000c0 c9 e6 04 00 00 78 62 a6 ac 57 7f 9d 2e 45 40 60 .....xb..W...E@` 00:22:29.486 000000d0 1b 92 a7 ef a5 80 ef 41 6b 24 dd 0a 7b 77 16 d9 .......Ak$..{w.. 00:22:29.486 000000e0 13 78 41 f6 18 af e9 cc 3a f3 64 c6 27 0f de 6b .xA.....:.d.'..k 00:22:29.486 000000f0 2b a4 36 2c 29 05 b5 6b 7a 51 42 47 a1 a5 86 84 +.6,)..kzQBG.... 
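One detail worth noting in the send_reply records: the reply length tracks the negotiated digest, len=32 alongside hash=1 (sha256) earlier in the trace and len=48 alongside hash=2 (sha384) here, which is the sizing an HMAC-style response would give. The fragment below demonstrates only that sizing; the real message layout and key derivation are defined by the NVMe in-band authentication spec and nvme_auth.c and are not reproduced here, and the key and challenge values are random placeholders.

# Sketch only: an HMAC response sized by the negotiated digest.
import hashlib
import hmac
import os

digest = hashlib.sha384        # hash=2 in the surrounding records
chap_key = os.urandom(48)      # placeholder for the configured key
challenge = os.urandom(48)     # placeholder for the controller's challenge

reply = hmac.new(chap_key, challenge, digest).digest()
print(len(reply))              # 48, matching "len=48" in the records here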
00:22:29.486 [2024-09-27 13:27:12.741559] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key3, hash=2, dhgroup=1, seq=3775755229, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.486 [2024-09-27 13:27:12.741834] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.486 [2024-09-27 13:27:12.745883] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.486 [2024-09-27 13:27:12.746155] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.486 [2024-09-27 13:27:12.746350] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.486 [2024-09-27 13:27:12.746516] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.486 [2024-09-27 13:27:12.798097] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.486 [2024-09-27 13:27:12.798333] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.486 [2024-09-27 13:27:12.798429] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 1 (ffdhe2048) 00:22:29.486 [2024-09-27 13:27:12.798563] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.486 [2024-09-27 13:27:12.798812] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.486 ctrlr pubkey: 00:22:29.486 00000000 6d 3f 2a 12 0e 20 97 2f 20 3e f1 22 e4 15 95 43 m?*.. ./ >."...C 00:22:29.486 00000010 57 bd 4a 4c 07 8d 57 21 6c 88 28 f5 2a 8f d0 7b W.JL..W!l.(.*..{ 00:22:29.486 00000020 2c 7a 80 4d 4b d5 d3 38 bd 6a 01 b4 98 5e 4c 44 ,z.MK..8.j...^LD 00:22:29.486 00000030 47 23 21 dd f6 1f 6c f0 01 90 7c eb 88 b2 b9 34 G#!...l...|....4 00:22:29.486 00000040 d2 0c 16 0d 4d f0 42 fa 7b dc e9 95 cb 8a f7 4d ....M.B.{......M 00:22:29.486 00000050 1b 90 80 11 5b ea 07 60 2b 2c b0 78 8e f9 64 8f ....[..`+,.x..d. 00:22:29.486 00000060 9f 6f 4b 6c d2 5f 1c ba f0 c9 27 b5 20 a2 ce 57 .oKl._....'. ..W 00:22:29.486 00000070 5f 3f b1 c4 dd c4 10 66 70 c5 00 97 26 92 6c b2 _?.....fp...&.l. 00:22:29.486 00000080 e9 88 cc 21 6e ba 35 3c 46 26 59 32 6a d8 82 dd ...!n.5.....S3.... 00:22:29.486 000000b0 4c 69 72 71 9e a3 e4 47 a1 a6 dd e2 18 b1 19 99 Lirq...G........ 00:22:29.486 000000c0 24 b7 46 02 8b 52 9b 6a 43 51 bf 99 a9 e4 34 15 $.F..R.jCQ....4. 00:22:29.486 000000d0 a5 89 7f 37 f7 0f a6 7c 14 82 99 dc 92 c6 5e c3 ...7...|......^. 00:22:29.486 000000e0 6a 70 5a e0 a1 be 80 62 67 b4 85 d4 b7 23 38 77 jpZ....bg....#8w 00:22:29.486 000000f0 44 9c 8f 7e 9c 88 57 21 41 18 5e 74 84 7f 5e 3d D..~..W!A.^t..^= 00:22:29.486 host pubkey: 00:22:29.486 00000000 40 85 7f 60 55 ee 76 01 d3 f5 20 26 e5 49 01 eb @..`U.v... &.I.. 00:22:29.486 00000010 7d b8 31 3b b0 4a 5c 0a 15 32 aa d9 de 30 78 d8 }.1;.J\..2...0x. 
00:22:29.486 00000020 e6 2d c1 6c c1 d0 61 f1 69 76 e8 ec 0b 04 89 40 .-.l..a.iv.....@ 00:22:29.486 00000030 48 0e 47 38 96 98 73 55 23 01 89 ea d4 05 c6 63 H.G8..sU#......c 00:22:29.486 00000040 64 f4 68 e6 aa e7 27 99 2f 34 5a 65 ec 34 6d 3f d.h...'./4Ze.4m? 00:22:29.486 00000050 14 77 5d 02 f4 0f c4 0a ae ae 81 f2 24 63 f5 e7 .w].........$c.. 00:22:29.486 00000060 fd 4f ae e6 65 6e 80 16 08 e5 35 86 ae b4 cf b4 .O..en....5..... 00:22:29.486 00000070 30 1b 07 68 e9 be 35 6c cd 12 4f 77 f4 48 20 58 0..h..5l..Ow.H X 00:22:29.486 00000080 21 c5 1f 51 00 f0 46 b0 cf 39 6c 5b d3 b6 a1 3b !..Q..F..9l[...; 00:22:29.486 00000090 a3 09 d1 b2 f8 b2 85 d3 cb 46 53 66 cc 9b 6b 68 .........FSf..kh 00:22:29.486 000000a0 b5 30 ae 16 12 01 eb 01 cb b7 8c 50 63 66 c9 ef .0.........Pcf.. 00:22:29.486 000000b0 89 e9 72 a5 a2 95 ea 60 72 c7 cd 14 18 86 1c fb ..r....`r....... 00:22:29.486 000000c0 d0 62 65 c0 fd 74 a5 4e 7c 2c 05 2e 0e 21 f3 82 .be..t.N|,...!.. 00:22:29.486 000000d0 2e 8b b7 f5 65 32 c6 a1 6c 97 35 b4 3c 34 0c 6d ....e2..l.5.<4.m 00:22:29.486 000000e0 9e e7 9b 86 ca 45 c9 11 f3 04 de 5c 18 17 35 28 .....E.....\..5( 00:22:29.486 000000f0 9d 44 ef 35 e9 cb 74 33 01 70 3b 25 c7 94 69 21 .D.5..t3.p;%..i! 00:22:29.486 dh secret: 00:22:29.486 00000000 03 74 0b 6c 55 b0 96 fc d8 33 8d 45 07 bd 93 4a .t.lU....3.E...J 00:22:29.486 00000010 49 81 1d 11 ed 00 09 2b a7 fd c0 0b 60 1a 96 2d I......+....`..- 00:22:29.486 00000020 ae 78 0b cd 61 37 0a ce 2d 79 74 2f f6 ce a5 e2 .x..a7..-yt/.... 00:22:29.486 00000030 d5 58 bf 5e 64 b6 24 8d 69 1c 64 03 39 c4 87 f8 .X.^d.$.i.d.9... 00:22:29.486 00000040 46 77 84 9b c7 17 31 7a e9 50 1d 70 37 0d 9d 38 Fw....1z.P.p7..8 00:22:29.486 00000050 a6 e9 b4 79 21 57 0e 18 e1 46 2d 8a 3d 25 da a1 ...y!W...F-.=%.. 00:22:29.486 00000060 d2 9c 22 83 ac 9c b9 82 f6 93 3c 42 54 cb 34 df .."............n...A..w 00:22:29.486 [2024-09-27 13:27:12.804614] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key3, hash=2, dhgroup=1, seq=3775755230, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.487 [2024-09-27 13:27:12.804921] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.487 [2024-09-27 13:27:12.808815] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.487 [2024-09-27 13:27:12.809091] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.487 [2024-09-27 13:27:12.809274] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.487 [2024-09-27 13:27:12.809454] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.487 [2024-09-27 13:27:12.900564] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.487 [2024-09-27 13:27:12.900828] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 2 (sha384) 00:22:29.487 [2024-09-27 13:27:12.901004] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 1 (ffdhe2048) 00:22:29.487 [2024-09-27 
13:27:12.901127] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.487 [2024-09-27 13:27:12.901320] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.487 ctrlr pubkey: 00:22:29.487 00000000 6b 93 96 e4 b9 95 67 33 62 3b 4f fc c6 9f 72 ae k.....g3b;O...r. 00:22:29.487 00000010 05 7b b5 4e 17 c7 d3 1a fc 99 9c a5 8e 1e 2e d3 .{.N............ 00:22:29.487 00000020 28 e5 85 68 fb 94 74 58 64 0f c5 83 05 a7 ed 5d (..h..tXd......] 00:22:29.487 00000030 79 a2 f3 e3 ed 71 c7 ef 56 ca 5b da 1d a1 3c e7 y....q..V.[...<. 00:22:29.487 00000040 f8 18 97 cd b3 8d aa 8d 9f 69 ad 17 9b 13 f1 89 .........i...... 00:22:29.487 00000050 d2 d0 d7 14 2a ac 77 13 ff 1a 3b cc 51 2f 9b 16 ....*.w...;.Q/.. 00:22:29.487 00000060 a2 c2 4a 6a 76 cf 01 e6 47 bd ca 50 55 f8 0f 5f ..Jjv...G..PU.._ 00:22:29.487 00000070 f3 93 e4 ee e9 95 45 6c 0d 8a 48 b9 b9 e1 ad 03 ......El..H..... 00:22:29.487 00000080 02 bc 55 6f c6 78 e4 dc 31 8b 83 7d a1 b8 e6 a6 ..Uo.x..1..}.... 00:22:29.487 00000090 37 0a ac bc 2a 65 52 cb 8e c3 ee f0 16 03 b4 21 7...*eR........! 00:22:29.487 000000a0 18 49 d4 92 9c a0 2f e5 54 8e 45 2a f4 fd 9c 14 .I..../.T.E*.... 00:22:29.487 000000b0 aa 19 76 23 c2 02 8d 76 d3 df 03 b6 66 bc c6 f8 ..v#...v....f... 00:22:29.487 000000c0 dd b7 08 00 62 f0 52 c2 7d 5c 7c d4 8a 02 87 34 ....b.R.}\|....4 00:22:29.487 000000d0 33 d4 14 34 a0 25 60 bd 8b 02 64 e7 e7 37 c9 72 3..4.%`...d..7.r 00:22:29.487 000000e0 35 60 e3 59 9a 53 e0 d6 f4 d0 f4 f8 23 9c 23 2e 5`.Y.S......#.#. 00:22:29.487 000000f0 97 5d 25 34 fe 3e 43 4a 16 7e 1e d9 ed ae 54 a4 .]%4.>CJ.~....T. 00:22:29.487 host pubkey: 00:22:29.487 00000000 fd 42 7c 01 18 77 75 bf 12 7c 84 5d df 77 51 21 .B|..wu..|.].wQ! 00:22:29.487 00000010 3d 22 dd 38 4f a0 a7 f3 1d ff 75 bf 93 c4 b0 57 =".8O.....u....W 00:22:29.487 00000020 11 9a 16 a0 17 ca 40 6a 85 12 99 1e 0c 01 a8 e3 ......@j........ 00:22:29.487 00000030 d0 c7 79 f9 e7 18 de a5 8d bb fc 3c 47 08 c8 47 ..y........CJ.~....T. 00:22:29.487 host pubkey: 00:22:29.487 00000000 0f e7 96 68 ff 10 5c 10 4d 50 10 92 83 76 ed 59 ...h..\.MP...v.Y 00:22:29.487 00000010 64 e7 0a b3 ea b1 d2 f5 38 1a 72 84 b2 e7 41 12 d.......8.r...A. 00:22:29.487 00000020 c6 61 89 2d f2 29 35 36 53 29 4d 35 c4 d1 67 66 .a.-.)56S)M5..gf 00:22:29.487 00000030 9b 49 5e 38 fd ce 93 71 76 41 7f 2e e3 1f 93 8f .I^8...qvA...... 00:22:29.487 00000040 7d 1f 50 3e 88 b6 91 41 1f 20 11 e2 7a b3 5f 56 }.P>...A. ..z._V 00:22:29.487 00000050 f2 d1 02 91 00 71 5f 81 42 f6 f5 22 84 ae aa d0 .....q_.B..".... 00:22:29.487 00000060 13 e9 60 e1 3c 58 39 86 e3 1c 3d a1 e8 77 f0 68 ..`....T../j 00:22:29.487 00000080 0c 70 7d 31 ec 0c da 87 3f 39 a0 d6 48 b0 bf ad .p}1....?9..H... 00:22:29.487 00000090 3f f2 01 69 1e a0 77 bb e9 22 47 df 63 e7 a2 ee ?..i..w.."G.c... 00:22:29.487 000000a0 16 df fc 70 dc d6 13 35 cb 2f 0c 59 bf 7b c1 5a ...p...5./.Y.{.Z 00:22:29.487 000000b0 98 d3 d1 8e 9b e8 37 da 8e af a1 96 57 73 36 3e ......7.....Ws6> 00:22:29.487 000000c0 b7 74 11 3c d0 8d f4 df 8b de 60 66 27 66 a6 89 .t.<......`f'f.. 00:22:29.487 000000d0 ab 8c 4d 76 0b f6 ef 6d fd 32 8e fd 2b 50 aa 8e ..Mv...m.2..+P.. 00:22:29.487 000000e0 1d c6 f0 35 6b 2c 52 90 88 96 ef e8 18 23 b2 c0 ...5k,R......#.. 00:22:29.487 000000f0 26 85 d5 1f b7 03 ed d9 af 77 3a 8f 92 83 41 d1 &........w:...A. 
00:22:29.487 dh secret: 00:22:29.487 00000000 6f 28 7e a9 9c ee 3c d5 da 51 d6 de 7b 9d 17 af o(~...<..Q..{... 00:22:29.487 00000010 c9 aa 17 e6 95 66 54 e4 e0 8d 9d 06 f6 98 73 9f .....fT.......s. 00:22:29.487 00000020 90 aa c3 ec 6a 71 ea a1 85 89 10 d4 0f 7e 34 1e ....jq.......~4. 00:22:29.487 00000030 bb 95 99 e7 36 e3 f7 fc 6e 2f ad cd 0e 1f 7d 33 ....6...n/....}3 00:22:29.487 00000040 80 f8 5d fa 54 5f 7d 08 46 3a 4e f3 cd 65 79 27 ..].T_}.F:N..ey' 00:22:29.487 00000050 40 86 75 68 f1 95 98 04 be b5 73 62 9d a3 4a d1 @.uh......sb..J. 00:22:29.487 00000060 19 49 99 8f d5 a8 6c 8e fa d3 08 a6 e3 32 00 c6 .I....l......2.. 00:22:29.487 00000070 70 3e b4 5e 3d ab 20 80 51 ba 9a 4a a9 72 21 75 p>.^=. .Q..J.r!u 00:22:29.487 00000080 fc 7c 76 24 65 25 85 d3 64 53 2a c1 0b e4 10 35 .|v$e%..dS*....5 00:22:29.487 00000090 35 5f 60 7b 38 3a 0e aa 16 b1 e8 ad ba ff fe a3 5_`{8:.......... 00:22:29.487 000000a0 ef 4b 6f e2 84 d2 96 13 40 4f 74 31 c7 99 61 fc .Ko.....@Ot1..a. 00:22:29.487 000000b0 42 bc 95 e7 72 27 9c 70 e3 d7 d6 7f 5b 9d 50 76 B...r'.p....[.Pv 00:22:29.487 000000c0 7d 06 5f 8a e4 be 2a 05 ba 4a b7 b1 45 8e a4 9b }._...*..J..E... 00:22:29.487 000000d0 4c 6d 1d 84 27 96 77 5a e5 cf aa 07 73 d7 45 d8 Lm..'.wZ....s.E. 00:22:29.487 000000e0 8d f8 d2 a5 df 94 5f 94 35 e7 02 c2 d4 52 6a 76 ......_.5....Rjv 00:22:29.487 000000f0 88 31 7e 07 4f 41 0c a3 55 01 af bf d3 90 cb ff .1~.OA..U....... 00:22:29.487 [2024-09-27 13:27:12.974188] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key4, hash=2, dhgroup=1, seq=3775755232, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.487 [2024-09-27 13:27:12.974458] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.487 [2024-09-27 13:27:12.978178] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.488 [2024-09-27 13:27:12.978571] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.488 [2024-09-27 13:27:12.978748] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.488 [2024-09-27 13:27:13.088669] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.488 [2024-09-27 13:27:13.088878] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 2 (sha384) 00:22:29.488 [2024-09-27 13:27:13.088977] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 2 (ffdhe3072) 00:22:29.488 [2024-09-27 13:27:13.089154] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.488 [2024-09-27 13:27:13.089355] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.488 ctrlr pubkey: 00:22:29.488 00000000 2f 73 7b 51 ac 31 a6 b1 68 a8 c3 3b 53 99 42 27 /s{Q.1..h..;S.B' 00:22:29.488 00000010 1a a5 4b 31 06 8a f2 26 ac c8 e4 8d 56 5c 9c b4 ..K1...&....V\.. 00:22:29.488 00000020 1b 07 5e 98 b9 85 0d 16 36 4b 0c 02 fc cf ea a6 ..^.....6K...... 
00:22:29.488 00000030 38 0d bb 81 9b 66 5d 79 50 50 8d a3 13 9b f6 35 8....f]yPP.....5 00:22:29.488 00000040 6d d7 73 84 be 89 29 a6 29 83 71 50 2c 56 28 1d m.s...).).qP,V(. 00:22:29.488 00000050 82 ac 34 42 24 2c 72 7e 69 a9 93 7d f6 68 19 82 ..4B$,r~i..}.h.. 00:22:29.488 00000060 32 2b ad 13 64 ca 89 56 1c 89 67 c8 a8 6c 1c 3b 2+..d..V..g..l.; 00:22:29.488 00000070 62 59 36 3c 49 cd b0 31 58 00 9a a6 b5 f0 2a 40 bY6...R. 00:22:29.488 host pubkey: 00:22:29.488 00000000 f3 ac f1 25 fa da d5 7a ac 69 47 eb e5 79 ac ee ...%...z.iG..y.. 00:22:29.488 00000010 32 ec 2b bf 10 dc 04 d6 bc da d9 5e 38 1c 6a 68 2.+........^8.jh 00:22:29.488 00000020 b6 0c eb ca 33 ce 98 7c 8d 61 c1 ad 01 45 97 da ....3..|.a...E.. 00:22:29.488 00000030 de 9f 25 ae 19 d9 c1 65 9d 39 b7 b3 a7 99 67 7d ..%....e.9....g} 00:22:29.488 00000040 2b cd f7 21 0c 67 ba eb 3b 95 27 b3 d3 40 84 8a +..!.g..;.'..@.. 00:22:29.488 00000050 ce 75 f5 e8 24 97 bd af a1 99 9b f0 8c 6a 3d a6 .u..$........j=. 00:22:29.488 00000060 e3 a5 28 c5 8a 24 fb 74 60 ee 27 e0 f3 a6 84 31 ..(..$.t`.'....1 00:22:29.488 00000070 3d f8 70 88 d5 03 c2 38 e2 15 91 ad f3 0d 64 d2 =.p....8......d. 00:22:29.488 00000080 86 e9 05 fd 4e 61 56 a8 2b 5d d1 2c a3 5c f5 7c ....NaV.+].,.\.| 00:22:29.488 00000090 85 fe 17 8b b3 a6 08 a1 dc 49 d5 74 b8 1a 96 f3 .........I.t.... 00:22:29.488 000000a0 e5 d8 e2 ef db bf 4e 46 b5 ca a4 a4 67 b4 6e bb ......NF....g.n. 00:22:29.488 000000b0 ea ef d2 1d 03 81 c1 5c 26 1c 46 65 e1 7d b9 99 .......\&.Fe.}.. 00:22:29.488 000000c0 a0 59 fc 09 2b fe c4 b7 69 2a a2 a6 e7 56 f9 d9 .Y..+...i*...V.. 00:22:29.488 000000d0 ac e1 31 09 32 c3 8a 09 a6 b7 40 d2 9d 98 46 64 ..1.2.....@...Fd 00:22:29.488 000000e0 95 ce 2c 28 dd 6c 8b b5 eb 9d 4f 12 89 f9 4a 72 ..,(.l....O...Jr 00:22:29.488 000000f0 2f 49 67 39 73 72 6c 44 40 0a 1c 20 34 05 83 5e /Ig9srlD@.. 4..^ 00:22:29.488 00000100 35 db e5 61 09 61 ff 81 5e ae 88 74 a7 51 32 a7 5..a.a..^..t.Q2. 00:22:29.488 00000110 18 b1 49 96 a6 f0 ef 1c 7c 99 55 b3 c0 c0 17 4f ..I.....|.U....O 00:22:29.488 00000120 9a 96 d3 e7 0d 2b 0f eb ae a4 a6 3e 54 2e 66 ba .....+.....>T.f. 00:22:29.488 00000130 c1 c9 c3 f5 f9 9d 75 b8 78 37 43 4e 09 82 99 e8 ......u.x7CN.... 00:22:29.488 00000140 e7 a2 b3 be af ff 12 d0 79 bf dc 4b ed 57 d8 36 ........y..K.W.6 00:22:29.488 00000150 9f e2 18 d2 fb c5 a6 1e e7 61 00 46 a2 15 80 a1 .........a.F.... 00:22:29.488 00000160 79 df e2 ef d2 51 6a 6c 4f 71 3d 0f 17 87 1c f1 y....QjlOq=..... 00:22:29.488 00000170 c0 a4 70 8e 38 25 33 98 dd a7 b1 73 65 c4 8b 6b ..p.8%3....se..k 00:22:29.488 dh secret: 00:22:29.488 00000000 e3 f1 d0 72 6b 03 cd d6 78 b8 22 45 49 9c 43 4c ...rk...x."EI.CL 00:22:29.488 00000010 07 2a 58 c5 c4 09 5b 48 80 ae b9 80 ba ae 91 86 .*X...[H........ 00:22:29.488 00000020 cc 26 9b 6d b0 e8 f7 85 79 43 13 2b 08 25 3f bd .&.m....yC.+.%?. 00:22:29.488 00000030 d7 89 9e 70 3a 02 94 72 70 50 22 e7 96 f0 f9 73 ...p:..rpP"....s 00:22:29.488 00000040 e2 d6 ce 4b ea a7 10 04 90 b0 8c f1 02 9f 6b b6 ...K..........k. 00:22:29.488 00000050 1c d2 ce 23 d3 8e 64 8a 34 aa c3 76 12 1e 86 f5 ...#..d.4..v.... 00:22:29.488 00000060 57 b7 01 43 c1 09 57 ce f6 fc 47 4c 8e 05 c5 ff W..C..W...GL.... 00:22:29.488 00000070 d4 2f 44 7a 6c 2b a1 ef dd b9 ae 77 f0 9c 11 4a ./Dzl+.....w...J 00:22:29.488 00000080 f3 70 6b eb de b3 14 0e f5 f7 99 65 c0 84 4b 88 .pk........e..K. 00:22:29.488 00000090 14 a9 da 35 48 a7 c7 50 11 bc 1d 01 5f 91 2a ec ...5H..P...._.*. 
00:22:29.488 000000a0 b2 e6 0c 80 e3 11 8f b2 ef 01 a8 87 04 65 26 60 .............e&` 00:22:29.488 000000b0 a5 32 b3 6d de d8 19 d1 e8 f2 1b c5 74 48 73 bc .2.m........tHs. 00:22:29.488 000000c0 c5 c1 28 8e d4 76 c8 32 bd a3 6f 29 96 2d a5 f5 ..(..v.2..o).-.. 00:22:29.488 000000d0 c4 d3 f0 7b 1b 7b 88 e2 ea ee 95 8b 1f dd 5f 55 ...{.{........_U 00:22:29.488 000000e0 ca 02 81 ca 5b c0 c0 e2 c4 53 50 43 86 74 0f 54 ....[....SPC.t.T 00:22:29.488 000000f0 f5 e6 07 70 f7 be a3 c7 77 99 90 85 95 61 29 c6 ...p....w....a). 00:22:29.488 00000100 62 60 78 a0 88 b3 11 6a 4a 6e d8 c0 b1 fc 46 87 b`x....jJn....F. 00:22:29.488 00000110 84 90 9d d1 54 3b da c1 59 19 68 ce 07 a2 c3 ff ....T;..Y.h..... 00:22:29.488 00000120 7d 45 df 8b 81 4c 24 cc d9 19 f0 2e e3 56 f9 1c }E...L$......V.. 00:22:29.488 00000130 87 cf fe 6f 77 43 fc 54 2f da 01 dd 4f aa b4 b8 ...owC.T/...O... 00:22:29.488 00000140 90 a4 cb 57 26 07 51 b1 0a 57 7e 5f 8c 78 7a f7 ...W&.Q..W~_.xz. 00:22:29.488 00000150 55 77 f9 82 2c 02 98 ea 5a 6f 14 84 81 fc c3 ea Uw..,...Zo...... 00:22:29.488 00000160 e3 1e 18 4d 0c 19 7c 51 de b1 d9 09 ba 9f 3c 5d ...M..|Q......<] 00:22:29.488 00000170 92 6b 43 c6 ae e0 7a c4 7d 4b 7c f3 64 8c f0 13 .kC...z.}K|.d... 00:22:29.488 [2024-09-27 13:27:13.102323] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key0, hash=2, dhgroup=2, seq=3775755233, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.488 [2024-09-27 13:27:13.102618] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.488 [2024-09-27 13:27:13.110318] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.488 [2024-09-27 13:27:13.110669] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.488 [2024-09-27 13:27:13.110885] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.488 [2024-09-27 13:27:13.111022] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.488 [2024-09-27 13:27:13.162931] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.488 [2024-09-27 13:27:13.163080] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.488 [2024-09-27 13:27:13.163224] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 2 (ffdhe3072) 00:22:29.488 [2024-09-27 13:27:13.163356] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.488 [2024-09-27 13:27:13.163536] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.488 ctrlr pubkey: 00:22:29.488 00000000 2f 73 7b 51 ac 31 a6 b1 68 a8 c3 3b 53 99 42 27 /s{Q.1..h..;S.B' 00:22:29.488 00000010 1a a5 4b 31 06 8a f2 26 ac c8 e4 8d 56 5c 9c b4 ..K1...&....V\.. 00:22:29.488 00000020 1b 07 5e 98 b9 85 0d 16 36 4b 0c 02 fc cf ea a6 ..^.....6K...... 
00:22:29.488 00000030 38 0d bb 81 9b 66 5d 79 50 50 8d a3 13 9b f6 35 8....f]yPP.....5 00:22:29.488 00000040 6d d7 73 84 be 89 29 a6 29 83 71 50 2c 56 28 1d m.s...).).qP,V(. 00:22:29.488 00000050 82 ac 34 42 24 2c 72 7e 69 a9 93 7d f6 68 19 82 ..4B$,r~i..}.h.. 00:22:29.488 00000060 32 2b ad 13 64 ca 89 56 1c 89 67 c8 a8 6c 1c 3b 2+..d..V..g..l.; 00:22:29.488 00000070 62 59 36 3c 49 cd b0 31 58 00 9a a6 b5 f0 2a 40 bY6...R. 00:22:29.489 host pubkey: 00:22:29.489 00000000 9c c9 86 cd be db 6f 50 5c 1b 53 f1 a5 c7 cb 02 ......oP\.S..... 00:22:29.489 00000010 20 5c d5 90 d5 51 1c 39 b5 7b fc 1e 9b 4d 2b c2 \...Q.9.{...M+. 00:22:29.489 00000020 08 51 8a 0c 36 c1 1f a6 f8 ba da ff f0 5a 52 a4 .Q..6........ZR. 00:22:29.489 00000030 8e 76 07 d5 b9 4d ab 22 1e f6 1c 04 0a a9 25 7e .v...M."......%~ 00:22:29.489 00000040 55 52 50 8a 72 31 6f ba d4 bc 2f 3e 46 8c 89 e6 URP.r1o.../>F... 00:22:29.489 00000050 66 69 d5 85 c3 2c b8 0c c0 5a a7 67 4a 25 fb 73 fi...,...Z.gJ%.s 00:22:29.489 00000060 71 ab 54 7d 8b 73 80 ec d4 ef dc 47 73 1f 6f c1 q.T}.s.....Gs.o. 00:22:29.489 00000070 11 8c ab a4 b9 64 1b 0c 3e e1 44 bd d9 55 74 23 .....d..>.D..Ut# 00:22:29.489 00000080 7e d0 eb ec 5a 18 d6 17 01 03 39 4f e1 97 59 94 ~...Z.....9O..Y. 00:22:29.489 00000090 b1 03 f4 23 79 16 79 4c ae 8a 7e f0 4d 5b 87 73 ...#y.yL..~.M[.s 00:22:29.489 000000a0 9a cb 08 41 b6 9d ae 5a 19 40 c1 15 5b ba a7 f1 ...A...Z.@..[... 00:22:29.489 000000b0 f1 8b c0 a2 99 d3 e7 d1 48 cf 2f 99 6f ce db 6b ........H./.o..k 00:22:29.489 000000c0 07 e3 98 86 00 3b 31 3e 40 95 9a f7 41 df 88 fb .....;1>@...A... 00:22:29.489 000000d0 84 37 49 46 f3 30 7e b2 6e 44 1a a2 04 38 53 9d .7IF.0~.nD...8S. 00:22:29.489 000000e0 5a d2 a6 1e 13 3f 28 3a ae 4d 7b cc 71 82 41 1f Z....?(:.M{.q.A. 00:22:29.489 000000f0 57 01 bc a9 20 68 89 cb 96 89 61 87 e7 8a 3e ea W... h....a...>. 00:22:29.489 00000100 1f 7d 02 8c da 16 0a cf cb 95 bf 61 b4 2b 1d 55 .}.........a.+.U 00:22:29.489 00000110 28 d9 18 df 17 9d 86 e1 3a 18 67 14 f9 c1 52 56 (.......:.g...RV 00:22:29.489 00000120 7c 2a 86 45 dc a9 c7 e1 17 01 8d c9 57 43 04 d0 |*.E........WC.. 00:22:29.489 00000130 af b0 ec 7a 6c 98 6e ff 57 62 0b be c4 79 99 3e ...zl.n.Wb...y.> 00:22:29.489 00000140 63 de 17 56 25 76 40 fe 7e 45 88 0f 3d 7a 94 ae c..V%v@.~E..=z.. 00:22:29.489 00000150 f0 4b 04 e0 06 d3 8a 99 f0 fa 26 66 f3 e9 7b 1a .K........&f..{. 00:22:29.489 00000160 ba ec a1 f7 25 6b eb e1 e2 15 5f 1d cb 8a 61 0c ....%k...._...a. 00:22:29.489 00000170 42 2a 39 33 b1 5f 7b 9b 8b 0b 9d 35 62 98 25 84 B*93._{....5b.%. 00:22:29.489 dh secret: 00:22:29.489 00000000 1c e8 4d 51 78 5e 25 4c 22 56 0a 76 69 19 de ae ..MQx^%L"V.vi... 00:22:29.489 00000010 79 81 19 94 82 0f 6f 00 e7 d6 f7 03 31 47 19 02 y.....o.....1G.. 00:22:29.489 00000020 08 06 e4 08 be 10 13 32 0e 52 3d 9b 9b d4 96 e3 .......2.R=..... 00:22:29.489 00000030 65 d9 5d 67 a8 ce b3 90 8a bb a8 df 25 b3 5b 26 e.]g........%.[& 00:22:29.489 00000040 04 33 f2 e3 31 30 6b 02 1f 38 7f 76 da a2 b1 31 .3..10k..8.v...1 00:22:29.489 00000050 2b db 47 fa cd 29 cc e2 9c 21 f9 4c c8 09 13 6a +.G..)...!.L...j 00:22:29.489 00000060 9e fb 0c fb 98 e5 03 45 26 68 78 2c e2 17 4e 62 .......E&hx,..Nb 00:22:29.489 00000070 75 2d 1d 0f b4 2b 59 80 9f f0 2f 6d 50 fb d4 0e u-...+Y.../mP... 00:22:29.489 00000080 e3 79 b7 6c 54 66 31 a3 f3 fe 91 93 ce ea 65 b2 .y.lTf1.......e. 00:22:29.489 00000090 d4 94 64 80 8e 82 00 66 11 bb 8b 26 4c 53 10 3d ..d....f...&LS.= 00:22:29.489 000000a0 ef 06 b0 c6 eb f6 f0 90 24 69 f7 f6 ea b9 d8 f1 ........$i...... 
00:22:29.489 000000b0 d0 b5 35 55 60 86 46 f9 63 6d 2c c2 8b 09 f2 e4 ..5U`.F.cm,..... 00:22:29.489 000000c0 6f 87 86 c7 94 e9 ad 5c 00 0e c4 72 fe 2e de 07 o......\...r.... 00:22:29.489 000000d0 bb 07 de ca 91 a6 c3 f8 ec 1f d5 5c 78 b4 4b 1e ...........\x.K. 00:22:29.489 000000e0 dd a3 7e 4e 41 d3 7f 2a 45 b3 74 57 e0 87 3e 78 ..~NA..*E.tW..>x 00:22:29.489 000000f0 32 92 ec 19 6f 5b d9 09 9f d8 c9 80 f9 21 51 59 2...o[.......!QY 00:22:29.489 00000100 6b ce 14 c8 ea 25 61 f4 43 b0 6d a8 aa 43 26 07 k....%a.C.m..C&. 00:22:29.489 00000110 11 48 84 13 09 99 9f 4f 44 fb 8a cb 29 f4 d1 6f .H.....OD...)..o 00:22:29.489 00000120 93 d7 82 e8 7e 7f d6 63 f3 e1 68 74 ec df 84 78 ....~..c..ht...x 00:22:29.489 00000130 2f fe 46 0b 9b 7a 4b 97 50 04 b1 ce ef 34 f4 65 /.F..zK.P....4.e 00:22:29.489 00000140 f4 48 d3 39 62 56 07 51 61 05 24 cb dc 20 d0 2f .H.9bV.Qa.$.. ./ 00:22:29.489 00000150 52 e2 75 f9 45 eb 2a 56 eb c0 17 88 fd dd bc 1d R.u.E.*V........ 00:22:29.489 00000160 97 ce ca a1 e5 55 f7 eb 8c 8c b0 36 84 d9 11 bd .....U.....6.... 00:22:29.489 00000170 bb 55 18 6c 77 0f 11 7d c1 2d 54 17 2d d3 6a 07 .U.lw..}.-T.-.j. 00:22:29.489 [2024-09-27 13:27:13.176982] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key0, hash=2, dhgroup=2, seq=3775755234, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.489 [2024-09-27 13:27:13.177328] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.489 [2024-09-27 13:27:13.185238] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.489 [2024-09-27 13:27:13.185666] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.489 [2024-09-27 13:27:13.185879] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.489 [2024-09-27 13:27:13.186059] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.489 [2024-09-27 13:27:13.287481] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.489 [2024-09-27 13:27:13.287650] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 2 (sha384) 00:22:29.489 [2024-09-27 13:27:13.287856] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 2 (ffdhe3072) 00:22:29.489 [2024-09-27 13:27:13.287955] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.489 [2024-09-27 13:27:13.288158] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.489 ctrlr pubkey: 00:22:29.489 00000000 2b 7e 58 7d cd 4f 71 51 24 3a 47 6a e8 58 ed 2e +~X}.OqQ$:Gj.X.. 00:22:29.489 00000010 1b bb c1 9e 31 f7 5f 5d 71 b1 5c 20 ad 7c d3 c1 ....1._]q.\ .|.. 00:22:29.489 00000020 db a8 5b 61 cd 93 d3 99 79 08 60 74 98 65 bc b8 ..[a....y.`t.e.. 00:22:29.489 00000030 4e 76 37 a3 2d 5f f4 ca 37 99 65 cd 52 49 ae a7 Nv7.-_..7.e.RI.. 
00:22:29.489 [ctrlr pubkey hex dump continues through offset 00000170; rows omitted, one row missing in this capture]
00:22:29.489 host pubkey: [hex dump omitted; truncated and interleaved with a repeated ctrlr pubkey fragment in this capture]
00:22:29.490 host pubkey: [384-byte hex dump omitted]
00:22:29.490 dh secret: [hex dump omitted; tail garbled and the intervening log records lost in this capture]
00:22:29.491 [ctrlr pubkey hex dump omitted; leading rows lost in this capture]
00:22:29.491 host pubkey: [384-byte hex dump omitted]
00:22:29.491 dh secret: [leading rows omitted; remaining rows follow]
00:22:29.491 000000e0 5e ea 3f 27 95 49 4e 05 ed 52 64 3f 2d e4 5e ac ^.?'.IN..Rd?-.^. 00:22:29.491 000000f0 f2 db eb e5 a3 ef 69 ad 91 31 b1 24 e2 38 b6 94 ......i..1.$.8.. 00:22:29.491 00000100 f4 48 60 01 1e be b5 9b 5c 06 22 99 fa 6a 66 c1 .H`.....\."..jf. 00:22:29.491 00000110 5b f9 9b 1e 00 1d 90 69 e8 73 e3 9f 75 10 c9 f7 [......i.s..u... 00:22:29.491 00000120 e2 51 c0 18 71 49 94 e5 f1 c0 f8 f2 03 29 67 cd .Q..qI.......)g. 00:22:29.491 00000130 d0 df cd c1 80 31 fe 04 2c 43 d4 d1 bc c1 d9 4c .....1..,C.....L 00:22:29.491 00000140 56 a8 2f 72 f1 1b df e3 b4 2f 25 8e 13 73 da ca V./r...../%..s.. 00:22:29.491 00000150 e5 19 28 5d ec 7b 06 c4 ec 45 0e 86 09 07 2c db ..(].{...E....,. 00:22:29.491 00000160 8e 33 13 af 27 69 e7 88 17 28 a3 93 04 60 bc 41 .3..'i...(...`.A 00:22:29.491 00000170 e7 87 77 d8 03 df 58 15 4f 3d ee fc 4c 10 00 72 ..w...X.O=..L..r 00:22:29.491 [2024-09-27 13:27:13.507498] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key2, hash=2, dhgroup=2, seq=3775755237, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.491 [2024-09-27 13:27:13.507802] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.491 [2024-09-27 13:27:13.515310] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.491 [2024-09-27 13:27:13.515608] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.491 [2024-09-27 13:27:13.515930] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.491 [2024-09-27 13:27:13.516151] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.491 [2024-09-27 13:27:13.567954] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.491 [2024-09-27 13:27:13.568149] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.491 [2024-09-27 13:27:13.568259] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 2 (ffdhe3072) 00:22:29.491 [2024-09-27 13:27:13.568370] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.491 [2024-09-27 13:27:13.568620] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.491 ctrlr pubkey: 00:22:29.491 00000000 44 b2 0c 42 25 fb 76 07 94 ac af af dd 4f 61 37 D..B%.v......Oa7 00:22:29.491 00000010 95 41 c3 61 15 f4 08 64 dc 96 40 57 44 81 dd c0 .A.a...d..@WD... 00:22:29.491 00000020 ea 0c 7d 64 5f 81 a8 7c 7a d9 53 0c 4b 00 f7 68 ..}d_..|z.S.K..h 00:22:29.491 00000030 d2 88 81 65 85 b7 15 6d c1 93 8f ff 77 02 b7 6a ...e...m....w..j 00:22:29.491 00000040 11 e7 45 df 0c 9c fc 72 47 9a d1 24 5f 5d 03 60 ..E....rG..$_].` 00:22:29.491 00000050 47 6b 6b 40 ef c3 36 e7 ce 42 9a 78 d3 43 bf d3 Gkk@..6..B.x.C.. 
00:22:29.491 00000060 5c f5 8f 0f 91 3d 53 63 fc 5c 69 db 58 9a bd 32 \....=Sc.\i.X..2 00:22:29.491 00000070 74 99 ec 87 5a bb 2d 09 d2 4e 50 59 45 9b ba 20 t...Z.-..NPYE.. 00:22:29.491 00000080 2c a0 3c eb b9 9f 78 34 35 3f 90 6a 63 6c 1e 53 ,.<...x45?.jcl.S 00:22:29.491 00000090 e3 60 d4 dc 99 8e e0 46 6e b7 85 98 6d 2e 91 02 .`.....Fn...m... 00:22:29.491 000000a0 9e da e1 61 28 6f 63 ea f3 41 e1 8a bb 2a 4d e9 ...a(oc..A...*M. 00:22:29.491 000000b0 01 d5 cb da b8 49 dc f1 15 89 db 13 98 d0 18 5b .....I.........[ 00:22:29.491 000000c0 34 4f 7c f4 a5 48 f0 bc 47 1f 22 d4 4b 36 d8 af 4O|..H..G.".K6.. 00:22:29.491 000000d0 7c 78 98 07 9c ba 04 01 a4 e7 6b ba 8a 45 c8 7f |x........k..E.. 00:22:29.491 000000e0 da 58 03 12 5d 3b 2f 84 f3 44 31 66 51 b8 14 28 .X..];/..D1fQ..( 00:22:29.491 000000f0 6c e0 55 21 bd b5 a7 17 c9 7b dc 5a 20 fe 28 c5 l.U!.....{.Z .(. 00:22:29.491 00000100 0e 21 30 ce da e5 d7 a4 7a 61 7a 87 7c 8a 6c eb .!0.....zaz.|.l. 00:22:29.491 00000110 50 a3 31 25 b5 ab 1b 36 b8 96 cd 51 fa 13 29 bb P.1%...6...Q..). 00:22:29.491 00000120 e3 d8 00 62 2b 46 ab 12 99 58 bb 89 e3 a9 54 80 ...b+F...X....T. 00:22:29.491 00000130 6f 2b 7f 65 c8 71 a2 25 09 d7 a4 d3 4f d0 d1 b2 o+.e.q.%....O... 00:22:29.491 00000140 06 67 a0 92 5f 85 ba 4d 5c 7a 63 83 2e b4 64 e6 .g.._..M\zc...d. 00:22:29.491 00000150 7e b3 3e 7e f5 50 bf a0 21 b9 20 47 f9 ef 4e 7d ~.>~.P..!. G..N} 00:22:29.491 00000160 3d d6 a1 79 b4 58 c8 0e c9 1b e4 51 a8 67 84 0d =..y.X.....Q.g.. 00:22:29.491 00000170 6b 46 ff 51 f9 ae a5 16 14 42 c8 b6 4c 44 a5 68 kF.Q.....B..LD.h 00:22:29.491 host pubkey: 00:22:29.491 00000000 23 40 37 5d 57 8a 05 cf 26 fa cb 20 e5 e4 25 e4 #@7]W...&.. ..%. 00:22:29.491 00000010 55 9d c5 98 d6 fa 6d de 53 03 1a 39 42 fe 2e 0d U.....m.S..9B... 00:22:29.491 00000020 6b 07 fc d2 8b 13 6e 36 50 81 a7 10 85 70 3b bc k.....n6P....p;. 00:22:29.491 00000030 2f c6 3d 67 c6 9d ff 0b 44 69 28 4e b5 cf ed 01 /.=g....Di(N.... 00:22:29.491 00000040 1f 1d 44 98 72 02 2d 00 67 65 79 7e 48 d2 2d 1c ..D.r.-.gey~H.-. 00:22:29.491 00000050 d3 1f d9 35 5a 33 3a bf 0e 42 84 25 e8 6e 7b 69 ...5Z3:..B.%.n{i 00:22:29.491 00000060 ef d1 2b 0e 5a b2 b6 d7 9c f1 60 72 88 cd 4a 12 ..+.Z.....`r..J. 00:22:29.491 00000070 88 b8 f9 fc dc d5 82 be e6 f8 35 71 69 cd 6d bb ..........5qi.m. 00:22:29.491 00000080 a0 1a 2c 4e 27 e4 fb 88 33 01 f7 0d a7 f4 dd 4c ..,N'...3......L 00:22:29.491 00000090 5f 15 bd 69 35 2a 57 0c 4b 93 86 0d a2 ac ab 49 _..i5*W.K......I 00:22:29.491 000000a0 c3 99 7b 43 23 32 74 e3 2d f9 79 64 f6 b1 23 39 ..{C#2t.-.yd..#9 00:22:29.491 000000b0 63 89 eb fd 22 6b 5b 52 44 10 f1 1d c4 ff 8e 2c c..."k[RD......, 00:22:29.491 000000c0 38 94 65 0d 0b 84 e6 2e 6b bc 01 70 8c 93 84 b1 8.e.....k..p.... 00:22:29.491 000000d0 60 c4 9c 75 eb 97 0f c5 7c 1b 04 c7 c7 60 db ff `..u....|....`.. 00:22:29.491 000000e0 d3 83 1e eb a4 14 d5 e2 7a 1b c3 36 f3 f6 2f 4a ........z..6../J 00:22:29.491 000000f0 5e 40 f2 8a 69 0b 45 5c 94 48 f8 b4 b3 c3 ab 1c ^@..i.E\.H...... 00:22:29.491 00000100 b9 02 b7 17 02 fd 37 52 b5 90 1a 7a ce 99 07 d1 ......7R...z.... 00:22:29.491 00000110 79 fb e0 c3 c1 7c ed df 06 b8 24 89 f5 5d 83 4c y....|....$..].L 00:22:29.491 00000120 81 ab 7a dc 40 a0 70 92 a2 e3 c8 2c 26 4d 26 a4 ..z.@.p....,&M&. 00:22:29.491 00000130 2c 97 bf 44 60 2b b7 1b a9 fe 61 74 08 0a 21 24 ,..D`+....at..!$ 00:22:29.491 00000140 5e 37 f6 ec bd b4 bf c3 27 09 6e a2 86 d6 22 8f ^7......'.n...". 00:22:29.491 00000150 a4 14 fc 0e 23 9f 76 49 e8 aa ec 50 22 3b e5 c1 ....#.vI...P";.. 
00:22:29.491 [host pubkey hex dump continues through offset 00000170; rows omitted]
00:22:29.491 dh secret: [hex dump omitted; garbled and the intervening log records lost in this capture]
00:22:29.492 dh secret: [leading rows omitted; remaining rows follow]
00:22:29.492 00000100 ed 2b 04 27 1e a7 f2 16 ee bf 6d d6 6f 03 c6 75 .+.'......m.o..u 00:22:29.492 00000110 3f 9c 1d 2a 42 7b e9 eb 72 1c 87 c7 e2 7f cd 23 ?..*B{..r......# 00:22:29.492 00000120 ac f7 c1 31 d4 e1 9a a2 da b1 19 74 94 51 ac 7d ...1.......t.Q.} 00:22:29.492 00000130 0c 31 dd 97 d5 73 a2 f1 0e f8 ba 8a 80 2a 0d c1 .1...s.......*.. 00:22:29.492 00000140 09 04 13 e6 ed a3 8c 2c 26 f6 1f ce 72 05 e0 9b .......,&...r... 00:22:29.492 00000150 9d 2b 2f 9c 43 0a 0a 27 4c 3a f0 8c 04 38 a7 c3 .+/.C..'L:...8.. 00:22:29.492 00000160 4e 38 33 c1 24 b5 4c 41 c8 ee 2d 47 68 e5 e6 41 N83.$.LA..-Gh..A 00:22:29.492 00000170 5f 29 11 eb 6d dd c3 18 fd 04 46 21 53 ea 83 64 _)..m.....F!S..d 00:22:29.492 [2024-09-27 13:27:13.706478] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key3, hash=2, dhgroup=2, seq=3775755239, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.492 [2024-09-27 13:27:13.706828] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.492 [2024-09-27 13:27:13.714474] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.492 [2024-09-27 13:27:13.714880] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.492 [2024-09-27 13:27:13.715099] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.492 [2024-09-27 13:27:13.715446] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.492 [2024-09-27 13:27:13.767240] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.492 [2024-09-27 13:27:13.767454] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.492 [2024-09-27 13:27:13.767668] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 2 (ffdhe3072) 00:22:29.492 [2024-09-27 13:27:13.767857] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.492 [2024-09-27 13:27:13.768076] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.492 ctrlr pubkey: 00:22:29.492 00000000 67 a2 02 30 d3 f2 f8 ad 9e ab 0e ec 27 1f 72 43 g..0........'.rC 00:22:29.492 00000010 ec b2 93 1c 94 ca 7a a7 d7 24 5b ca 20 c0 b2 86 ......z..$[. ... 00:22:29.492 00000020 f7 f5 ea 21 59 b0 da 0b db ad ff fc 6b 15 bb 2d ...!Y.......k..- 00:22:29.492 00000030 22 95 48 de 10 f0 f6 ee 6f c2 90 84 36 10 25 bc ".H.....o...6.%. 00:22:29.492 00000040 d5 df 91 6a 94 81 94 0e f0 97 98 14 60 49 d2 53 ...j........`I.S 00:22:29.492 00000050 e0 06 42 05 81 ae b4 46 e7 e8 bc d2 c6 07 7a 57 ..B....F......zW 00:22:29.492 00000060 90 b1 1c 23 2f bf 72 3d 86 e0 7e 64 d7 c6 49 4b ...#/.r=..~d..IK 00:22:29.492 00000070 44 7a 56 e3 0c 33 20 35 77 52 1b 4c a7 64 d4 88 DzV..3 5wR.L.d.. 
00:22:29.492 00000080 64 57 bb da 6a e1 a2 18 c8 f8 31 5a 9d d7 5e 71 dW..j.....1Z..^q 00:22:29.492 00000090 aa 08 80 44 27 ce 59 2b 79 4f 6c bc d5 4d 9e a8 ...D'.Y+yOl..M.. 00:22:29.492 000000a0 23 c3 3b 9e 4f f4 45 bb d7 9e 48 c5 7e 31 85 f6 #.;.O.E...H.~1.. 00:22:29.492 000000b0 75 2b 75 1a d0 f5 6e 3f a5 ae d7 0e 4a 65 1a 7a u+u...n?....Je.z 00:22:29.492 000000c0 58 ac 8e e1 7b ce c2 4b 37 57 72 52 44 f4 92 c8 X...{..K7WrRD... 00:22:29.492 000000d0 53 7f 79 93 b6 c5 df 71 cd 8e 85 b8 4c eb 60 8a S.y....q....L.`. 00:22:29.492 000000e0 7e 9f c0 2b 4a b0 b4 03 28 e0 79 de 6a e2 a4 09 ~..+J...(.y.j... 00:22:29.493 000000f0 25 a9 59 ce b2 2f 0a 34 33 78 0b c7 5c ea 08 c7 %.Y../.43x..\... 00:22:29.493 00000100 7a a2 e3 18 b7 a9 0b 8c 77 ab c3 24 bf 3f 54 36 z.......w..$.?T6 00:22:29.493 00000110 45 ad 5e 00 ca be d2 2f 8e ae 7c c4 6d 5f a6 3c E.^..../..|.m_.< 00:22:29.493 00000120 2b e1 76 8e 7b b9 aa 66 f0 26 ce 27 b1 0f 8c fe +.v.{..f.&.'.... 00:22:29.493 00000130 c9 79 22 62 5e ec 76 9a 97 e2 a7 83 b3 1c 88 f4 .y"b^.v......... 00:22:29.493 00000140 2c 59 e1 9d bd f4 71 9d 32 15 79 cd c2 31 05 d3 ,Y....q.2.y..1.. 00:22:29.493 00000150 4a 62 d6 e2 24 12 b0 55 37 5a f5 c9 6f f1 01 a3 Jb..$..U7Z..o... 00:22:29.493 00000160 ea 8f d0 68 f9 71 86 6e a1 3c 24 7d 23 8c 7c 62 ...h.q.n.<$}#.|b 00:22:29.493 00000170 cd 8d 16 a1 2a ad e1 0b cb 17 be 9b 41 ba 5a 41 ....*.......A.ZA 00:22:29.493 host pubkey: 00:22:29.493 00000000 7e c0 fa 49 2f 60 f9 79 15 e9 be 62 55 4f 78 f1 ~..I/`.y...bUOx. 00:22:29.493 00000010 4f 80 d9 3d 4c 94 51 77 03 35 18 eb e5 32 58 3a O..=L.Qw.5...2X: 00:22:29.493 00000020 73 97 df fb ce 90 e0 ce aa b7 4e d6 42 3f 7e 37 s.........N.B?~7 00:22:29.493 00000030 74 f5 34 96 1b 7b 45 9f af db 0e 68 f1 6b 27 bd t.4..{E....h.k'. 00:22:29.493 00000040 28 40 cd 92 2f 64 2d 8a f1 c4 3d fe 79 3d 68 3c (@../d-...=.y=h< 00:22:29.493 00000050 83 ed d5 e2 a4 20 97 a5 04 6d 7a d4 5a 91 0c 93 ..... ...mz.Z... 00:22:29.493 00000060 98 3b 9c 39 3a 32 77 c9 ee 85 02 92 ae 1b b2 02 .;.9:2w......... 00:22:29.493 00000070 ea 24 6d ff 6a 3c 07 5e a1 e8 c3 2d 40 3a f8 a8 .$m.j<.^...-@:.. 00:22:29.493 00000080 5d 78 e8 0b 8e b4 d8 d5 ef c3 b4 c1 f9 6a d5 15 ]x...........j.. 00:22:29.493 00000090 81 42 83 d1 5c e7 69 d0 1f 81 a9 ea f4 ca bf 8d .B..\.i......... 00:22:29.493 000000a0 42 bc 0b 79 da 8f bb 2d c5 3a cf 7b 5e 6b e8 27 B..y...-.:.{^k.' 00:22:29.493 000000b0 42 4e d7 52 31 52 86 19 bf e7 6a ce df b8 46 c1 BN.R1R....j...F. 00:22:29.493 000000c0 1a 83 1c f0 3e 33 a0 dc 08 7e d1 8a 1a 46 ce bb ....>3...~...F.. 00:22:29.493 000000d0 8e ec 04 08 69 f8 d3 f7 dd b6 26 97 36 5e c4 ff ....i.....&.6^.. 00:22:29.493 000000e0 ee 64 fa b0 a5 80 d7 e6 1c d9 04 bf a0 59 0f 42 .d...........Y.B 00:22:29.493 000000f0 6e 7f f9 e2 98 2d 26 4f 16 46 f7 f0 95 43 58 13 n....-&O.F...CX. 00:22:29.493 00000100 0d 4d 24 4f 67 27 e7 ce d4 d3 50 2e 6e 16 d3 2a .M$Og'....P.n..* 00:22:29.493 00000110 c4 96 14 b7 b3 4e 25 ea 83 bf 5f 5b 7c b5 90 81 .....N%..._[|... 00:22:29.493 00000120 79 0e 47 14 23 48 f5 99 90 b2 d9 e5 20 25 cc ba y.G.#H...... %.. 
00:22:29.493 [host pubkey hex dump continues through offset 00000170; rows omitted]
00:22:29.493 dh secret: [hex dump omitted; garbled and the intervening log records lost in this capture]
00:22:29.493 [ctrlr pubkey hex dump omitted; leading rows lost in this capture]
00:22:29.493 host pubkey: [hex dump omitted; garbled in this capture]
00:22:29.494 [hex dump omitted; final row follows]
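Note: the ctrlr pubkey, host pubkey and dh secret dumps in this section are consistent with a finite-field Diffie-Hellman exchange over the negotiated ffdhe group: each side raises the peer's public value to its own private exponent and both arrive at the same shared value, which is then folded into the challenge/response MAC. The reply length printed by nvme_auth_send_reply (len=48) matches the 48-byte output of the negotiated sha384 digest. The toy sketch below is illustrative only; it uses a deliberately small prime rather than an RFC 7919 group, and a placeholder MAC layout rather than the exact DH-HMAC-CHAP construction.

import hashlib
import hmac
import secrets

# Toy finite-field Diffie-Hellman plus MAC sketch, mirroring the shape of the
# exchange in the log (ctrlr pubkey / host pubkey / dh secret, then a MAC whose
# length matches the negotiated digest).  The prime below is a tiny demo value,
# NOT one of the 2048/3072/4096-bit ffdhe groups this log negotiates, and the
# MAC inputs are placeholders, not the DH-HMAC-CHAP formula.
P = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, a small prime used only for illustration
G = 2

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

host_priv, host_pub = keypair()      # corresponds to the "host pubkey" dump
ctrlr_priv, ctrlr_pub = keypair()    # corresponds to the "ctrlr pubkey" dump

# Both sides derive the same shared value, the "dh secret" in the log.
host_secret = pow(ctrlr_pub, host_priv, P)
ctrlr_secret = pow(host_pub, ctrlr_priv, P)
assert host_secret == ctrlr_secret

# With "digest: 2 (sha384)" the response MAC is 48 bytes, matching the
# "len=48" printed by nvme_auth_send_reply.
chap_key = b"example-dhchap-secret"          # stands in for key0..key4 (hypothetical value)
challenge = secrets.token_bytes(48)
response = hmac.new(chap_key + host_secret.to_bytes(8, "big"),
                    challenge, hashlib.sha384).digest()
print(len(response))  # 48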
00:22:29.494 00000170 a7 d2 e7 b8 a3 03 b1 e5 28 66 01 c0 ae 68 9c 71 ........(f...h.q 00:22:29.494 [2024-09-27 13:27:13.908691] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key4, hash=2, dhgroup=2, seq=3775755241, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.494 [2024-09-27 13:27:13.908960] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.494 [2024-09-27 13:27:13.916726] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.494 [2024-09-27 13:27:13.917128] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.494 [2024-09-27 13:27:13.917322] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.494 [2024-09-27 13:27:13.969085] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.494 [2024-09-27 13:27:13.969324] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.494 [2024-09-27 13:27:13.969555] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 2 (ffdhe3072) 00:22:29.494 [2024-09-27 13:27:13.969832] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.494 [2024-09-27 13:27:13.970117] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.494 ctrlr pubkey: 00:22:29.494 00000000 67 4a 59 ca a1 46 3d 09 9f 6c bd 58 50 9d cc 8f gJY..F=..l.XP... 00:22:29.494 00000010 fb 14 ab 88 93 c0 29 e2 af 07 1c f5 b6 08 61 a1 ......).......a. 00:22:29.494 00000020 e3 36 f9 42 5f 01 71 0c db 62 7d ac 32 ee 67 cc .6.B_.q..b}.2.g. 00:22:29.494 00000030 ac 78 4b ce c8 58 47 61 92 92 78 44 de 5a bc 4c .xK..XGa..xD.Z.L 00:22:29.494 00000040 4d 1f a3 1d e5 0e ba be 5f 3b 69 3e 32 ea 17 9e M......._;i>2... 00:22:29.494 00000050 b1 1f 77 12 65 e7 87 80 65 94 cc 30 eb be c3 a5 ..w.e...e..0.... 00:22:29.494 00000060 92 b6 18 6d 59 bb 38 ee df 2c 67 87 38 c6 85 d3 ...mY.8..,g.8... 00:22:29.494 00000070 f0 b8 b8 4c 79 a8 50 f3 c1 be c1 82 8a 8f 99 f0 ...Ly.P......... 00:22:29.494 00000080 2f 14 82 c4 6c 48 ff b5 59 cb 7d 62 78 2a 7f a7 /...lH..Y.}bx*.. 00:22:29.494 00000090 61 b3 f5 1f 9b f0 d6 45 71 30 40 12 e3 f2 1c 8d a......Eq0@..... 00:22:29.494 000000a0 23 62 d6 5f 65 ad 10 09 8b b0 42 4c 35 48 2a dc #b._e.....BL5H*. 00:22:29.494 000000b0 f2 e6 a6 64 20 b2 e7 05 ec c0 ff 88 36 0d b7 ca ...d .......6... 00:22:29.494 000000c0 6d f1 b6 72 b9 34 5e bc fd 17 db 3f 23 f1 c3 3c m..r.4^....?#..< 00:22:29.494 000000d0 2c 76 a1 89 ec 3a eb 40 e4 83 54 e5 ef b7 8f cd ,v...:.@..T..... 00:22:29.494 000000e0 ee ce a5 02 3e 33 7b 4e ac 23 99 3d 65 ed 72 8b ....>3{N.#.=e.r. 00:22:29.494 000000f0 70 b1 38 57 cf 12 94 e8 b5 b5 91 20 60 a4 5e 2b p.8W....... `.^+ 00:22:29.494 00000100 1d a9 99 89 b5 9b 73 00 46 9b 77 05 f8 6c 25 37 ......s.F.w..l%7 00:22:29.494 00000110 5a 18 8f 88 45 bc 2f cb ac 39 ba 10 4a a8 f4 f0 Z...E./..9..J... 
00:22:29.494 00000120 fa c3 4a 75 f5 e9 14 14 46 2d 07 dd e8 cc 29 1d ..Ju....F-....). 00:22:29.494 00000130 b5 25 75 2d a0 9a 5c da 7c bd b8 20 06 4e 8e 0f .%u-..\.|.. .N.. 00:22:29.494 00000140 78 07 43 1f 6c c2 86 c4 ec 2c f8 72 ca bf af a1 x.C.l....,.r.... 00:22:29.494 00000150 17 52 e3 f3 f9 6a ff 88 dd 6c 3e 88 25 26 09 67 .R...j...l>.%&.g 00:22:29.494 00000160 2d cf 23 57 cf 43 f9 8f 02 60 f3 c8 19 3d b3 49 -.#W.C...`...=.I 00:22:29.494 00000170 6f 48 16 49 de 55 ab 54 68 a4 b8 b0 0f 01 87 82 oH.I.U.Th....... 00:22:29.494 host pubkey: 00:22:29.494 00000000 d3 c9 b5 92 7d 0f 0b ff f9 10 ed 29 98 0c 48 78 ....}......)..Hx 00:22:29.494 00000010 3f 16 12 92 d0 28 ab 4f 70 35 b8 fb ac 08 39 f9 ?....(.Op5....9. 00:22:29.494 00000020 83 e7 06 4c 20 e9 f8 51 cb 83 55 aa ce ef 39 82 ...L ..Q..U...9. 00:22:29.494 00000030 44 53 37 d2 de 25 eb 66 27 a4 d5 53 dd e4 73 31 DS7..%.f'..S..s1 00:22:29.494 00000040 04 f6 4b 04 ea fc 7d 4f 98 eb 4e 8f 9c c5 a2 46 ..K...}O..N....F 00:22:29.494 00000050 04 55 d7 25 61 4c 60 17 85 12 19 e8 1c 1e 5d 98 .U.%aL`.......]. 00:22:29.494 00000060 33 3c 0a c8 d3 63 bb 5b 6c 00 71 a1 80 68 0f 16 3<...c.[l.q..h.. 00:22:29.494 00000070 f9 a2 93 11 9b 95 3f bf 35 4a eb af 19 94 ed 61 ......?.5J.....a 00:22:29.494 00000080 27 f0 47 b1 6c ab c6 d7 0f 46 e2 a4 f9 00 99 47 '.G.l....F.....G 00:22:29.494 00000090 bd 92 ff dd 31 f1 57 c2 a8 45 ce 9f 44 91 a4 82 ....1.W..E..D... 00:22:29.494 000000a0 00 43 ba 0f 7d e4 86 a9 2f f1 ee e3 21 4b 9d 3c .C..}.../...!K.< 00:22:29.494 000000b0 88 95 22 11 a1 bb f2 cc f8 8e 47 ed 60 5c b8 c1 ..".......G.`\.. 00:22:29.494 000000c0 be b3 37 59 1c 7a fe 6b 81 08 eb 0e 0a 66 36 d2 ..7Y.z.k.....f6. 00:22:29.494 000000d0 9a 9e 3f e5 e4 79 81 89 47 9e d3 9b 2f ef 20 18 ..?..y..G.../. . 00:22:29.494 000000e0 37 2e dc 1d cd 14 39 16 4d 73 8a 2e d8 e7 01 30 7.....9.Ms.....0 00:22:29.494 000000f0 34 e9 c3 5b 36 2a 1c 98 6a f3 37 0c ed 98 cc f5 4..[6*..j.7..... 00:22:29.494 00000100 81 74 45 55 c7 4f 17 d4 63 3b e9 ca 2c 7a b6 ca .tEU.O..c;..,z.. 00:22:29.494 00000110 f4 4f 4b 6c bf 03 0f 86 3d c7 69 e1 79 e6 89 d6 .OKl....=.i.y... 00:22:29.494 00000120 27 18 5e 95 d1 4e 02 6b 37 ca 36 f1 6c e5 3d 29 '.^..N.k7.6.l.=) 00:22:29.494 00000130 29 28 09 64 fc 7c 96 7d 4d 54 d4 75 d7 d8 55 0c )(.d.|.}MT.u..U. 00:22:29.494 00000140 41 e4 8a 4e 2f 2e b0 a1 33 aa e8 8a bb 21 81 20 A..N/...3....!. 00:22:29.494 00000150 61 90 e8 0d 65 a1 f4 80 6c b2 72 24 f6 8c ae a3 a...e...l.r$.... 00:22:29.494 00000160 35 42 55 4d 79 43 a7 20 d4 0e 3a 66 bc 33 50 70 5BUMyC. ..:f.3Pp 00:22:29.494 00000170 84 96 ae 9a 46 87 c5 87 ee ff 8b 01 ee bd dc eb ....F........... 00:22:29.494 dh secret: 00:22:29.494 00000000 90 b8 e1 14 5d 2e b7 ac 54 97 9c 46 be ca c2 b0 ....]...T..F.... 00:22:29.494 00000010 2f ea 84 65 ea ad 09 03 88 5c 91 a3 7d 01 d8 c1 /..e.....\..}... 00:22:29.494 00000020 99 6d 1b 91 53 41 22 af 57 22 9f 08 3c 99 ab ca .m..SA".W"..<... 00:22:29.494 00000030 53 43 32 02 55 61 97 32 19 8f 0c 1c 93 19 43 2c SC2.Ua.2......C, 00:22:29.494 00000040 ad c1 c3 35 7b 8b 59 aa fa 05 f8 b2 cd b5 ed 92 ...5{.Y......... 00:22:29.494 00000050 3f 51 2b 75 bf 5f 91 d6 30 cf 9b 88 a8 40 d8 de ?Q+u._..0....@.. 00:22:29.494 00000060 dd 9b 35 56 a1 5e 51 b6 45 e6 ab 4c da a8 41 72 ..5V.^Q.E..L..Ar 00:22:29.494 00000070 94 9c 05 8f d7 f5 ee 21 40 81 cc 68 2f 7f 35 c5 .......!@..h/.5. 00:22:29.494 00000080 2f 72 e9 73 6b 42 42 27 3a ba 23 16 8b 3e 5c 6c /r.skBB':.#..>\l 00:22:29.494 00000090 b4 52 d7 df 84 c1 6c 53 3d b0 65 13 bb e4 57 df .R....lS=.e...W. 
00:22:29.494 000000a0 59 a2 4c 07 65 7a 48 43 0f ab d6 6d 15 68 d1 32 Y.L.ezHC...m.h.2 00:22:29.494 000000b0 68 36 13 7d a1 09 a1 a3 7b f8 b4 e7 46 a7 18 aa h6.}....{...F... 00:22:29.494 000000c0 3c 6f 0b b0 a5 87 e2 e4 ba 96 85 01 67 71 d9 64 .E 00:22:29.495 000000f0 a1 41 bb fd 7c 24 20 9a b0 44 1c bf fe 39 c2 00 .A..|$ ..D...9.. 00:22:29.495 00000100 76 3e 10 b0 74 76 0a ad d4 40 66 24 7f 62 26 2c v>..tv...@f$.b&, 00:22:29.495 00000110 fa 4e 2e 88 2a cd 58 f7 8e b5 08 f8 24 39 80 14 .N..*.X.....$9.. 00:22:29.495 00000120 78 24 8a 53 4b 5d d4 2f 88 3f bd b7 ee d2 a6 9e x$.SK]./.?...... 00:22:29.495 00000130 df 09 52 7a bf 3a fe 82 52 78 a3 56 56 af 53 12 ..Rz.:..Rx.VV.S. 00:22:29.495 00000140 e2 2a 85 50 f6 68 bb 50 d3 0f d5 a4 fb 9c 91 32 .*.P.h.P.......2 00:22:29.495 00000150 fd 37 f7 32 45 13 4c 2c d6 d4 0f 66 be b5 25 55 .7.2E.L,...f..%U 00:22:29.495 00000160 ea c8 a4 a7 f6 16 5a 47 f9 be 71 a5 ff 04 80 30 ......ZG..q....0 00:22:29.495 00000170 b4 58 6a 0f 69 f2 97 d4 fb 88 6c bb 9a de 6f ae .Xj.i.....l...o. 00:22:29.495 00000180 58 23 a2 dc 69 6f 18 04 79 a3 a4 29 75 3f 41 bc X#..io..y..)u?A. 00:22:29.495 00000190 c1 9b 42 f3 6b 09 07 29 c4 11 29 d7 1a 76 4b 24 ..B.k..)..)..vK$ 00:22:29.495 000001a0 e7 b8 f9 04 a8 5a 97 54 3c 1b b3 22 b9 8f f8 9d .....Z.T<..".... 00:22:29.495 000001b0 ef a6 ba 6a df bc 5f 3a 41 2d 28 9e da 92 f0 ef ...j.._:A-(..... 00:22:29.495 000001c0 82 31 36 89 62 c5 95 fd 99 b5 ed 20 a0 38 2b b4 .16.b...... .8+. 00:22:29.495 000001d0 82 c9 e6 5b fb 3b 47 b2 b3 05 d3 c6 a9 20 05 1e ...[.;G...... .. 00:22:29.495 000001e0 8d 29 76 6f d1 7a cf 12 89 bf bd fa c1 84 1e 4d .)vo.z.........M 00:22:29.495 000001f0 06 b5 1c f5 99 35 69 be f1 e4 99 a3 00 62 7e 87 .....5i......b~. 00:22:29.495 host pubkey: 00:22:29.495 00000000 17 20 f1 65 db 3b 5d 4b c0 6d 39 5b cb 49 45 73 . .e.;]K.m9[.IEs 00:22:29.495 00000010 d8 34 49 b7 24 b3 da de 80 3b 40 96 79 d6 53 6c .4I.$....;@.y.Sl 00:22:29.495 00000020 36 3d f1 53 4f d5 60 62 e4 29 c3 3c 43 1c 2a 48 6=.SO.`b.)...EL.... 00:22:29.495 000000c0 8a 6f 36 a2 bd 4c 60 79 1d e8 4f c7 d9 ea 14 19 .o6..L`y..O..... 00:22:29.495 000000d0 03 fa 10 47 84 c0 56 f2 ae a8 f0 4a 0c f5 88 b4 ...G..V....J.... 00:22:29.495 000000e0 6e 33 d5 a4 4a 10 fb 4a fa 5b 68 19 1a 62 bd 9c n3..J..J.[h..b.. 00:22:29.495 000000f0 87 44 9f b8 ba 41 7d 76 24 92 ae ad c0 78 78 43 .D...A}v$....xxC 00:22:29.495 00000100 76 df df 20 a0 b3 bf 83 45 5d 22 62 80 aa c4 19 v.. ....E]"b.... 00:22:29.495 00000110 e2 2d c0 1a f2 29 34 f7 a1 9a 36 f1 57 aa ea e0 .-...)4...6.W... 00:22:29.495 00000120 e2 6c bf 96 a5 22 f0 6b cc 25 1c a5 ef 8f 2d ec .l...".k.%....-. 00:22:29.495 00000130 6a 97 a9 1d 93 3a 23 f3 12 3a 38 e0 7e 51 7f 07 j....:#..:8.~Q.. 00:22:29.495 00000140 71 82 5e be 87 f2 ff 8f 0d ea 01 55 05 a2 01 b8 q.^........U.... 00:22:29.495 00000150 16 de 84 cf 08 cc 02 5d 9a 6b 3b 6a cc da b6 9e .......].k;j.... 00:22:29.495 00000160 b4 e8 9a 66 b1 78 4f 21 db 46 c9 88 33 ff 37 c8 ...f.xO!.F..3.7. 00:22:29.495 00000170 07 71 0f ac f4 a9 8e 19 23 ee cb 88 1a c4 21 e0 .q......#.....!. 00:22:29.495 00000180 0b 4b a5 17 93 73 f6 48 cf dd 39 f7 da 16 7a cf .K...s.H..9...z. 00:22:29.495 00000190 0e b0 2f 48 67 d8 61 eb 11 16 d3 1c c1 13 91 d2 ../Hg.a......... 00:22:29.495 000001a0 26 bd c2 05 13 6a d9 19 7e 7b c2 6a 80 d5 6a 44 &....j..~{.j..jD 00:22:29.495 000001b0 71 67 5b c9 49 a8 82 5a fa f3 9d 52 87 58 88 09 qg[.I..Z...R.X.. 00:22:29.495 000001c0 9e 60 ff 3d d3 6c 6f 7b 6d ea f3 79 fa ee 81 86 .`.=.lo{m..y.... 
00:22:29.495 000001d0 11 89 10 1c c9 07 9a ab de af 2f 5f 98 88 7e 27 ........../_..~' 00:22:29.495 000001e0 b5 96 1d 5c b0 12 5d 4d cd c9 2f 65 7d 9b ed 8b ...\..]M../e}... 00:22:29.495 000001f0 31 ad 57 99 94 34 5f ca 48 24 f4 99 33 cd cf 01 1.W..4_.H$..3... 00:22:29.495 [2024-09-27 13:27:14.130145] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key0, hash=2, dhgroup=3, seq=3775755243, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.495 [2024-09-27 13:27:14.130477] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.495 [2024-09-27 13:27:14.157792] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.495 [2024-09-27 13:27:14.158155] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.495 [2024-09-27 13:27:14.158358] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.495 [2024-09-27 13:27:14.158622] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.495 [2024-09-27 13:27:14.209398] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.495 [2024-09-27 13:27:14.209751] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.495 [2024-09-27 13:27:14.209952] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 3 (ffdhe4096) 00:22:29.495 [2024-09-27 13:27:14.210132] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.495 [2024-09-27 13:27:14.210453] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.495 ctrlr pubkey: 00:22:29.495 00000000 a8 6d 38 7d ea 7d 91 c3 10 5f ad ed 9a b2 c5 0f .m8}.}..._...... 00:22:29.495 00000010 a3 00 89 00 b1 57 12 37 e9 c2 e9 e5 77 a8 4a b4 .....W.7....w.J. 00:22:29.495 00000020 35 8c 2e 60 93 8c 40 1b 2b cf c1 07 a1 ad 53 61 5..`..@.+.....Sa 00:22:29.495 00000030 0b 7e bb c6 ae b4 e4 75 89 73 ac c0 73 01 d2 b4 .~.....u.s..s... 00:22:29.495 00000040 68 12 a8 b5 a6 ef 05 ea 2d de 1e 4b 44 01 2a f4 h.......-..KD.*. 00:22:29.495 00000050 30 5c d2 ff b0 40 d6 c0 ad 4d dc 89 e1 3a ab 3b 0\...@...M...:.; 00:22:29.495 00000060 e8 79 83 4e 32 f0 95 05 14 42 a5 4b 0c 24 94 ca .y.N2....B.K.$.. 00:22:29.495 00000070 b0 c9 63 86 8c 3f 12 bc fe fc 1d 6d 6b 53 60 5b ..c..?.....mkS`[ 00:22:29.495 00000080 c4 ab b7 42 20 85 a4 e0 55 b1 3f cb 6e f2 e0 23 ...B ...U.?.n..# 00:22:29.495 00000090 26 7a 81 4a e7 a9 96 be b8 dc e2 dc ab c5 0f 3a &z.J...........: 00:22:29.495 000000a0 c8 52 6b 7a 28 68 30 1f b7 c5 8f 4e f9 94 44 a4 .Rkz(h0....N..D. 00:22:29.495 000000b0 c2 11 46 9a a9 be 88 5c 7d 0e 8f 89 66 23 78 5a ..F....\}...f#xZ 00:22:29.495 000000c0 72 3a 3f 3b 95 3a ee e9 49 af 35 c6 99 68 99 65 r:?;.:..I.5..h.e 00:22:29.495 000000d0 09 3f fe 52 17 59 37 6e 83 24 d0 4e 06 03 cc 88 .?.R.Y7n.$.N.... 
00:22:29.495 000000e0 b5 79 6e 52 dd 1b 54 ca ad f4 9b 98 97 3e a2 45 .ynR..T......>.E 00:22:29.495 000000f0 a1 41 bb fd 7c 24 20 9a b0 44 1c bf fe 39 c2 00 .A..|$ ..D...9.. 00:22:29.495 00000100 76 3e 10 b0 74 76 0a ad d4 40 66 24 7f 62 26 2c v>..tv...@f$.b&, 00:22:29.495 00000110 fa 4e 2e 88 2a cd 58 f7 8e b5 08 f8 24 39 80 14 .N..*.X.....$9.. 00:22:29.495 00000120 78 24 8a 53 4b 5d d4 2f 88 3f bd b7 ee d2 a6 9e x$.SK]./.?...... 00:22:29.495 00000130 df 09 52 7a bf 3a fe 82 52 78 a3 56 56 af 53 12 ..Rz.:..Rx.VV.S. 00:22:29.495 00000140 e2 2a 85 50 f6 68 bb 50 d3 0f d5 a4 fb 9c 91 32 .*.P.h.P.......2 00:22:29.495 00000150 fd 37 f7 32 45 13 4c 2c d6 d4 0f 66 be b5 25 55 .7.2E.L,...f..%U 00:22:29.495 00000160 ea c8 a4 a7 f6 16 5a 47 f9 be 71 a5 ff 04 80 30 ......ZG..q....0 00:22:29.495 00000170 b4 58 6a 0f 69 f2 97 d4 fb 88 6c bb 9a de 6f ae .Xj.i.....l...o. 00:22:29.495 00000180 58 23 a2 dc 69 6f 18 04 79 a3 a4 29 75 3f 41 bc X#..io..y..)u?A. 00:22:29.495 00000190 c1 9b 42 f3 6b 09 07 29 c4 11 29 d7 1a 76 4b 24 ..B.k..)..)..vK$ 00:22:29.495 000001a0 e7 b8 f9 04 a8 5a 97 54 3c 1b b3 22 b9 8f f8 9d .....Z.T<..".... 00:22:29.495 000001b0 ef a6 ba 6a df bc 5f 3a 41 2d 28 9e da 92 f0 ef ...j.._:A-(..... 00:22:29.496 000001c0 82 31 36 89 62 c5 95 fd 99 b5 ed 20 a0 38 2b b4 .16.b...... .8+. 00:22:29.496 000001d0 82 c9 e6 5b fb 3b 47 b2 b3 05 d3 c6 a9 20 05 1e ...[.;G...... .. 00:22:29.496 000001e0 8d 29 76 6f d1 7a cf 12 89 bf bd fa c1 84 1e 4d .)vo.z.........M 00:22:29.496 000001f0 06 b5 1c f5 99 35 69 be f1 e4 99 a3 00 62 7e 87 .....5i......b~. 00:22:29.496 host pubkey: 00:22:29.496 00000000 2a f3 08 16 2a a2 9b da 4c 35 0a a1 45 97 c4 1b *...*...L5..E... 00:22:29.496 00000010 9f 92 72 7e f5 a9 c7 8d 91 8f ef 8d 98 97 ca c5 ..r~............ 00:22:29.496 00000020 03 b8 9f 9e b8 65 ae fd 37 ab 67 80 5e 6b 02 39 .....e..7.g.^k.9 00:22:29.496 00000030 8b 29 b2 1b c2 f7 7f ac 12 cb 09 ae 39 25 eb eb .)..........9%.. 00:22:29.496 00000040 2a a7 a2 78 63 a0 e5 1c 8b 64 34 69 f7 27 95 2e *..xc....d4i.'.. 00:22:29.496 00000050 10 67 df e6 b0 15 86 e8 07 c9 30 76 f7 7c a1 0a .g........0v.|.. 00:22:29.496 00000060 8e b2 63 e7 d6 10 16 09 df 8c c9 ce b4 74 b8 5d ..c..........t.] 00:22:29.496 00000070 26 79 48 e0 e4 f6 8c 70 76 95 82 e9 be 54 2e f3 &yH....pv....T.. 00:22:29.496 00000080 7b fc ca 6f 8a 23 f4 b5 09 27 5e 7a e1 ca 65 c2 {..o.#...'^z..e. 00:22:29.496 00000090 00 9b 54 d9 a6 97 40 2c 5b 02 61 5d eb 6f f1 12 ..T...@,[.a].o.. 00:22:29.496 000000a0 07 72 56 76 5f 47 69 2e 68 12 91 0c 0e 81 38 50 .rVv_Gi.h.....8P 00:22:29.496 000000b0 58 93 aa 95 ce d2 b9 b7 d6 14 6b 05 d4 20 c5 e4 X.........k.. .. 00:22:29.496 000000c0 69 99 64 73 c4 73 b7 b0 13 4a f8 26 db cb 17 ff i.ds.s...J.&.... 00:22:29.496 000000d0 9d 03 9e 5e 62 d2 a4 cb 94 7f a9 c1 6d 9d ae 01 ...^b.......m... 00:22:29.496 000000e0 8f 82 66 0b 3d cb 54 61 98 c4 24 e7 f9 d5 28 63 ..f.=.Ta..$...(c 00:22:29.496 000000f0 ef 39 07 8a 6f 51 95 68 bf d1 51 8e 86 5c 38 71 .9..oQ.h..Q..\8q 00:22:29.496 00000100 8a 4e 95 fa a6 31 0f 7f 39 2e 60 f5 24 0f 96 b8 .N...1..9.`.$... 00:22:29.496 00000110 8f 7d 00 59 84 44 81 cf 76 17 b2 cd 83 fb 4f 29 .}.Y.D..v.....O) 00:22:29.496 00000120 4b fb 97 5f 20 ce f9 49 a2 f4 8d 43 b5 e6 61 a3 K.._ ..I...C..a. 00:22:29.496 00000130 43 c9 c3 2e 25 22 4c fb 8f e3 10 02 bb 94 8b d7 C...%"L......... 00:22:29.496 00000140 68 1d bb 58 df 74 9d 45 68 18 a7 93 37 99 e6 0f h..X.t.Eh...7... 00:22:29.496 00000150 7f b3 25 48 82 b0 b1 e2 a0 b6 61 db d0 8d 74 d9 ..%H......a...t. 
00:22:29.496 00000160 44 f9 26 4d 8b 84 76 f2 86 48 34 a9 af 0c 51 4b D.&M..v..H4...QK 00:22:29.496 00000170 34 d1 43 5d d5 03 d7 96 5b bd 98 97 3a 91 63 5a 4.C]....[...:.cZ 00:22:29.496 00000180 5f c7 05 19 17 27 29 91 4b 83 d6 91 5a 5f 7a 93 _....').K...Z_z. 00:22:29.496 00000190 40 0d b4 2e 79 d1 b4 92 0a 43 26 ab 2e e5 f2 23 @...y....C&....# 00:22:29.496 000001a0 78 ca 7d 18 4d 1c ae 0b c6 66 1b b5 75 09 71 80 x.}.M....f..u.q. 00:22:29.496 000001b0 1b d0 fd 0b 19 94 b2 ef 9d ab 0b 5a 81 85 3e 70 ...........Z..>p 00:22:29.496 000001c0 54 40 41 b3 48 1f 9f fa aa 59 1e 4d af 4f 7c 44 T@A.H....Y.M.O|D 00:22:29.496 000001d0 c4 37 3f ec b0 7e 85 3e 46 c5 4f 57 24 63 8e b1 .7?..~.>F.OW$c.. 00:22:29.496 000001e0 41 2a 44 3c f6 d7 41 df 30 7f 3d 4b e9 ae 15 5f A*D<..A.0.=K..._ 00:22:29.496 000001f0 fb e3 d2 18 11 87 3e 80 b0 e6 34 f2 a7 d6 f1 f6 ......>...4..... 00:22:29.496 dh secret: 00:22:29.496 00000000 55 a7 56 9e 4d a1 36 f7 5f 72 30 e5 11 f8 31 27 U.V.M.6._r0...1' 00:22:29.496 00000010 23 e9 7a af 1d 9b 44 49 94 ca 7b 22 f7 07 fd ac #.z...DI..{".... 00:22:29.496 00000020 d5 8d 3b 2b 60 55 de c6 8b 3c c6 5e 89 a4 0a fe ..;+`U...<.^.... 00:22:29.496 00000030 54 7d 4f 58 39 e6 08 85 18 ac 9c ba 0e f8 18 52 T}OX9..........R 00:22:29.496 00000040 99 b9 25 ae be b6 ca a8 61 b2 39 78 28 f9 32 44 ..%.....a.9x(.2D 00:22:29.496 00000050 82 66 30 ba 99 b3 19 02 a8 09 46 9c b0 14 7e e2 .f0.......F...~. 00:22:29.496 00000060 97 b4 3b ad 5a a6 30 72 22 c7 e3 34 09 f4 53 dd ..;.Z.0r"..4..S. 00:22:29.496 00000070 34 73 e7 90 53 c1 21 0b 80 19 41 f5 70 66 f6 c9 4s..S.!...A.pf.. 00:22:29.496 00000080 63 dc 5e c0 f6 03 5f 32 79 21 43 a2 e1 f0 10 d5 c.^..._2y!C..... 00:22:29.496 00000090 64 f8 3c 96 41 9f a9 73 cd 67 7a 85 c9 1c 64 1f d.<.A..s.gz...d. 00:22:29.496 000000a0 5e 31 ce e0 e0 99 2e 10 99 13 a7 47 3b b5 db bf ^1.........G;... 00:22:29.496 000000b0 5c a8 c8 f4 f4 85 c0 1f 2b 8c 00 a2 31 8c e4 d0 \.......+...1... 00:22:29.496 000000c0 6d b7 dc fe f7 7a 4c 0a e6 3f b1 4f f3 8f e4 ee m....zL..?.O.... 00:22:29.496 000000d0 4b 5c bb 1c 7c 57 e2 04 a2 24 68 b2 2d d0 91 be K\..|W...$h.-... 00:22:29.496 000000e0 95 6b 07 71 4e 80 87 53 b6 e5 52 79 82 30 29 a6 .k.qN..S..Ry.0). 00:22:29.496 000000f0 85 32 41 90 1a d0 7c b8 45 9c 4c e1 75 40 b1 b8 .2A...|.E.L.u@.. 00:22:29.496 00000100 ec c4 01 89 27 c5 fd ee 95 c5 4c 14 13 0d 56 cf ....'.....L...V. 00:22:29.496 00000110 23 a4 e5 67 e2 b3 bb 53 c2 c8 83 ba d5 51 e5 f1 #..g...S.....Q.. 00:22:29.496 00000120 f5 41 85 55 ff 8f 2d 90 50 a6 e4 94 a8 04 7e 46 .A.U..-.P.....~F 00:22:29.496 00000130 05 70 17 6c 19 ca e1 a8 1e aa 00 5d 4a b7 fc 5f .p.l.......]J.._ 00:22:29.496 00000140 0d aa 52 59 24 77 92 ec 99 0a 30 c1 ec bf 0a 25 ..RY$w....0....% 00:22:29.496 00000150 96 75 cf 0a 86 fe 1c e6 44 a0 6a 22 8a c1 27 40 .u......D.j"..'@ 00:22:29.496 00000160 28 a8 27 f7 52 c1 d8 c4 18 1e a6 a4 4b 48 34 cd (.'.R.......KH4. 00:22:29.496 00000170 c6 0d 16 67 2e 33 60 2e 12 76 29 98 c9 45 9f 98 ...g.3`..v)..E.. 00:22:29.496 00000180 74 a2 a7 b5 3d cf 4d 3a 91 61 66 61 14 32 ad 98 t...=.M:.afa.2.. 00:22:29.496 00000190 3d e7 54 cd bb 50 76 ee 28 fb 81 77 43 f3 6a f5 =.T..Pv.(..wC.j. 00:22:29.496 000001a0 c5 4d 4e aa 55 97 96 20 e0 a7 92 25 7d 8c 4f 2c .MN.U.. ...%}.O, 00:22:29.496 000001b0 6b 5f e9 57 13 bf b8 92 88 3d af 9e cf 37 29 27 k_.W.....=...7)' 00:22:29.496 000001c0 ed d0 35 ec 54 41 cd f0 fb fb 69 43 6a e1 2c 76 ..5.TA....iCj.,v 00:22:29.496 000001d0 58 d1 46 95 2a 40 8f 70 f7 f9 01 70 60 02 8e cd X.F.*@.p...p`... 
00:22:29.496 000001e0 5e 6d dc a5 78 b8 ab 91 01 e4 44 7a 9a 73 2d 11 ^m..x.....Dz.s-. 00:22:29.496 000001f0 66 ea a5 5f 08 91 4a ee 5b f9 b8 ec 17 7f be 7d f.._..J.[......} 00:22:29.496 [2024-09-27 13:27:14.238397] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key0, hash=2, dhgroup=3, seq=3775755244, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.496 [2024-09-27 13:27:14.238783] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.496 [2024-09-27 13:27:14.263228] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.496 [2024-09-27 13:27:14.263730] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.496 [2024-09-27 13:27:14.263950] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.496 [2024-09-27 13:27:14.264220] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.496 [2024-09-27 13:27:14.380638] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.496 [2024-09-27 13:27:14.380813] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 2 (sha384) 00:22:29.496 [2024-09-27 13:27:14.380979] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 3 (ffdhe4096) 00:22:29.496 [2024-09-27 13:27:14.381761] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.496 [2024-09-27 13:27:14.382096] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.496 ctrlr pubkey: 00:22:29.496 00000000 29 f7 24 b5 85 8b 3f 05 ed d1 1f ee 39 ab 36 23 ).$...?.....9.6# 00:22:29.496 00000010 08 33 91 b3 f4 bb 40 d2 2e 1a 97 1d 24 cc 6a 76 .3....@.....$.jv 00:22:29.496 00000020 c3 e4 2d fb 46 24 0c a4 f7 66 d7 79 c0 f8 04 71 ..-.F$...f.y...q 00:22:29.496 00000030 db 11 6a f0 9b 33 e8 98 a5 c0 1d 88 d6 a0 45 a5 ..j..3........E. 00:22:29.496 00000040 96 a9 bc a6 39 1c 2e 0c 56 2d 11 6f ef 88 43 20 ....9...V-.o..C 00:22:29.496 00000050 27 19 63 28 1c c5 1f 45 d6 e9 fd 06 09 ca f1 f2 '.c(...E........ 00:22:29.496 00000060 c3 e2 af f7 4b fc 77 5a c8 82 19 1c 8c 5d f6 03 ....K.wZ.....].. 00:22:29.496 00000070 b0 6c a6 f7 80 fd 38 1a 23 9f 7d 69 ec 0a d9 bf .l....8.#.}i.... 00:22:29.496 00000080 30 4f ad c8 e1 90 38 01 e1 c5 31 e8 40 83 13 85 0O....8...1.@... 00:22:29.496 00000090 08 e9 90 f5 20 84 bf 80 16 2f d9 2a 81 2e a9 d1 .... ..../.*.... 00:22:29.496 000000a0 d3 98 96 3b 82 f9 c1 5a 40 2e 63 7d 50 bd 9c 0b ...;...Z@.c}P... 00:22:29.496 000000b0 9a 7a 3d d9 4a ec eb 2c 52 93 55 5f fd 24 b3 9e .z=.J..,R.U_.$.. 00:22:29.496 000000c0 1a fb ef ff 37 35 ee d0 a3 10 0c 46 a1 a0 8d 22 ....75.....F..." 00:22:29.496 000000d0 09 05 7a 74 3a 45 bb 29 ec 82 70 f3 6e a8 73 db ..zt:E.)..p.n.s. 00:22:29.496 000000e0 7b cd 2a 14 a6 16 d8 81 23 59 27 4c 6d 94 e4 29 {.*.....#Y'Lm..) 
00:22:29.496 000000f0 4c e0 cd a9 f1 f3 a6 32 7c 24 65 de 0c 8f 80 a7 L......2|$e..... 00:22:29.496 00000100 a6 20 4f ec 21 26 20 a8 ab d3 4c 48 48 70 95 0a . O.!& ...LHHp.. 00:22:29.496 00000110 ea 28 92 63 7d fa 56 60 36 6a 39 6e f5 55 46 9d .(.c}.V`6j9n.UF. 00:22:29.496 00000120 e7 23 05 91 68 58 ae 78 82 3f 46 a0 c7 77 e6 71 .#..hX.x.?F..w.q 00:22:29.496 00000130 8b 22 71 1f 46 a9 d6 1e 22 bc 2e 27 9b ce b6 36 ."q.F..."..'...6 00:22:29.496 00000140 06 22 1a fb 84 dc 38 c9 9b 92 1c de 82 42 4d bb ."....8......BM. 00:22:29.496 00000150 b8 1c f7 71 97 a0 b4 12 26 8f 40 08 25 56 62 f6 ...q....&.@.%Vb. 00:22:29.496 00000160 1a 55 db ca a6 5a ed 26 04 ac 44 a8 24 6c 58 30 .U...Z.&..D.$lX0 00:22:29.496 00000170 54 db df 4c 62 38 dd 45 39 e1 51 5e 59 45 a1 ee T..Lb8.E9.Q^YE.. 00:22:29.496 00000180 51 99 f2 9f 14 23 de e4 63 d6 31 a0 12 91 a2 71 Q....#..c.1....q 00:22:29.496 00000190 3a 4c 67 6d 5d c0 ac 7b a8 cb 06 e0 a4 00 4b ba :Lgm]..{......K. 00:22:29.496 000001a0 06 e6 95 fd ea b9 5b 1f d1 8c 2d 42 f5 d7 5e 3a ......[...-B..^: 00:22:29.496 000001b0 05 10 56 9e f5 ef 87 cd 23 30 84 ae a3 dd 9d fb ..V.....#0...... 00:22:29.496 000001c0 6b 66 54 14 32 14 19 38 8c 90 85 97 7d b0 c0 80 kfT.2..8....}... 00:22:29.496 000001d0 6e d3 ec 48 1d 67 67 80 c0 b9 2b 60 9d 42 e0 f9 n..H.gg...+`.B.. 00:22:29.496 000001e0 59 8d 7e 7c d6 6c f5 49 75 b5 b8 fb 5e ad 2c ea Y.~|.l.Iu...^.,. 00:22:29.496 000001f0 c9 e3 fd 63 e0 f6 92 d6 46 c6 7f 90 3f b4 90 1c ...c....F...?... 00:22:29.496 host pubkey: 00:22:29.496 00000000 18 ab 5c ee 10 d5 55 a7 55 f8 2f d4 98 5c 47 1c ..\...U.U./..\G. 00:22:29.496 00000010 8f 9d 48 cb 8c 7e a5 5a 13 31 76 61 67 90 ee d4 ..H..~.Z.1vag... 00:22:29.496 00000020 7b f2 cc e5 40 d5 c9 46 1b 1e 16 be b2 46 4d b8 {...@..F.....FM. 00:22:29.496 00000030 63 2b d1 e6 e1 c8 86 a0 4f 8b 3f eb d2 01 ae 08 c+......O.?..... 00:22:29.496 00000040 39 df 47 09 76 d6 32 35 98 b4 4a 14 a4 f9 36 54 9.G.v.25..J...6T 00:22:29.496 00000050 df f8 71 7d a2 c8 3f c6 b7 de 11 d2 92 f5 b3 2c ..q}..?........, 00:22:29.496 00000060 99 c7 19 9a 4e a7 38 0a 72 2c 90 3f b2 f8 36 49 ....N.8.r,.?..6I 00:22:29.496 00000070 7a 86 03 00 b0 69 b0 ee b5 ad d0 46 99 4c aa 64 z....i.....F.L.d 00:22:29.496 00000080 1a 8e c6 f5 6d 7c 8d 85 96 3b 73 1c ef 4e 6e e8 ....m|...;s..Nn. 00:22:29.496 00000090 f2 ee d8 dd 70 61 9d 2d bd ef 69 f0 b7 06 5a 76 ....pa.-..i...Zv 00:22:29.496 000000a0 a7 b8 bd 6c 10 49 65 b6 fb 25 ce 95 f8 c2 3e 5b ...l.Ie..%....>[ 00:22:29.496 000000b0 90 b9 1b dd 65 96 10 1e 95 9c 64 31 fc af 58 c3 ....e.....d1..X. 00:22:29.496 000000c0 96 f3 98 e9 f7 2c 00 9c 62 6e b5 68 9b fc 37 2b .....,..bn.h..7+ 00:22:29.496 000000d0 49 53 bb b1 35 ae 64 37 ad 87 99 94 2d a7 fc ca IS..5.d7....-... 00:22:29.496 000000e0 1c af a1 24 a6 5d 17 7b ab 26 80 aa c2 ef b5 d3 ...$.].{.&...... 00:22:29.496 000000f0 e3 15 c8 51 84 4b 05 14 d9 6a 88 5b 53 30 fe 9b ...Q.K...j.[S0.. 00:22:29.496 00000100 c0 12 0b 17 00 4a 0e ee 14 5b 55 f7 7d 2a 4a 1f .....J...[U.}*J. 00:22:29.496 00000110 8d 2c 75 d2 67 40 f7 e6 19 58 da 1d 1b 2a 52 5f .,u.g@...X...*R_ 00:22:29.496 00000120 c5 eb b6 99 7d 80 d6 0a fb 52 78 49 23 11 82 60 ....}....RxI#..` 00:22:29.496 00000130 92 db 1e 9d 73 21 54 49 c3 d3 0b 1a 86 66 b2 ba ....s!TI.....f.. 00:22:29.497 00000140 69 3d 76 0c 1a c5 05 97 7a 07 18 06 ba 57 2c 46 i=v.....z....W,F 00:22:29.497 00000150 25 25 65 f6 1f 89 6b c5 ac 83 a4 9f af 74 14 3c %%e...k......t.< 00:22:29.497 00000160 e3 fa 96 c4 6a 94 39 19 60 35 b7 6d f9 44 75 06 ....j.9.`5.m.Du. 
00:22:29.497 00000170 92 23 2e 5a 9b 73 22 e8 11 31 61 2f a4 5a 79 60 .#.Z.s"..1a/.Zy` 00:22:29.497 00000180 cf 17 e3 56 b1 9c a7 28 19 c5 19 bf 59 04 5f ee ...V...(....Y._. 00:22:29.497 00000190 e2 06 74 dd 79 d9 44 0a 3c 19 32 76 9f d9 40 73 ..t.y.D.<.2v..@s 00:22:29.497 000001a0 19 bc 47 3f 60 93 cd de 85 f4 e5 9c d7 51 fd ed ..G?`........Q.. 00:22:29.497 000001b0 24 76 4d 73 ef 98 bf a1 b2 be 9c 0c 8c bf 7a 77 $vMs..........zw 00:22:29.497 000001c0 90 bf 71 f0 43 aa f8 0f 61 f2 5f f3 50 11 98 6e ..q.C...a._.P..n 00:22:29.497 000001d0 63 4a 70 23 bf e0 32 d0 de e4 ee 7e 8a fc 59 4a cJp#..2....~..YJ 00:22:29.497 000001e0 45 de e6 78 a2 1b f9 89 d1 c7 cf b9 31 da cd fc E..x........1... 00:22:29.497 000001f0 c6 16 29 a3 ca 79 1e 70 e5 ed 4e a0 2f e5 3b ed ..)..y.p..N./.;. 00:22:29.497 dh secret: 00:22:29.497 00000000 b3 7e db 88 96 f4 4f d0 61 14 a1 0d d9 58 96 72 .~....O.a....X.r 00:22:29.497 00000010 ab 0d 79 35 05 aa db 99 ca 64 3b f1 f3 f9 71 d5 ..y5.....d;...q. 00:22:29.497 00000020 18 8d aa bb 17 81 4f e9 09 38 d1 91 91 b4 b7 7b ......O..8.....{ 00:22:29.497 00000030 ae d3 36 e2 49 e8 64 aa 0d 75 51 a4 90 41 54 4b ..6.I.d..uQ..ATK 00:22:29.497 00000040 9e 95 20 c7 c2 42 d5 6d cd 10 4a 82 52 29 22 49 .. ..B.m..J.R)"I 00:22:29.497 00000050 62 5d 17 ed 5a 0a e5 f8 cf 35 f6 6d a6 8c 42 1b b]..Z....5.m..B. 00:22:29.497 00000060 69 f5 91 43 9b 9e 43 80 bc 83 f1 08 64 3a b3 2f i..C..C.....d:./ 00:22:29.497 00000070 3e 8c e3 b4 5c d9 56 48 ec c2 94 fa b2 5a 80 f4 >...\.VH.....Z.. 00:22:29.497 00000080 0f 03 f0 15 00 83 a1 79 a1 b2 9f c5 29 dd d4 cd .......y....)... 00:22:29.497 00000090 7c 09 f1 c1 ab 1f b4 5e 84 8a 88 8b 2a d1 17 fe |......^....*... 00:22:29.497 000000a0 9b 73 e6 86 c0 34 60 8b c8 b7 20 eb 1c 88 a7 1d .s...4`... ..... 00:22:29.497 000000b0 91 3c 98 7a 24 fc 9f cc ef ec c2 19 52 85 00 67 .<.z$.......R..g 00:22:29.497 000000c0 cb 4b 73 b1 e5 8f 5c 0c 88 d5 1a 4d b7 a0 29 cf .Ks...\....M..). 00:22:29.497 000000d0 30 fa 78 31 76 8d 08 12 45 1f 7d 5c 38 e1 c6 76 0.x1v...E.}\8..v 00:22:29.497 000000e0 b0 29 09 68 db 1c e4 df 85 39 66 b5 7e 54 f1 37 .).h.....9f.~T.7 00:22:29.497 000000f0 7e 4f 18 51 48 07 20 c7 54 38 13 f9 b7 aa c4 78 ~O.QH. .T8.....x 00:22:29.497 00000100 4d cb 2e 51 28 b8 c2 e2 8c 55 4e 8e 7b 46 1c 7e M..Q(....UN.{F.~ 00:22:29.497 00000110 1e ac f9 78 7a e4 ef 38 47 fd be e4 ef c0 df 0d ...xz..8G....... 00:22:29.497 00000120 94 c6 87 da 0d e9 6c d4 bf f4 96 e5 da d9 ea be ......l......... 00:22:29.497 00000130 42 ed 22 01 cb 0b 36 ee 88 a0 7e e5 94 14 2a 93 B."...6...~...*. 00:22:29.497 00000140 4e df c8 dc a0 a1 48 20 c1 25 db 75 ab 4b f8 bd N.....H .%.u.K.. 00:22:29.497 00000150 e5 0c b8 b0 aa ac 16 5f b7 f7 4d e9 a4 05 b8 ed ......._..M..... 00:22:29.497 00000160 e2 19 1c 30 a1 22 53 cc 9e f3 4b 82 b0 14 38 3a ...0."S...K...8: 00:22:29.497 00000170 ab 34 38 63 bb 8a 8b d1 27 93 ee b4 b2 26 18 99 .48c....'....&.. 00:22:29.497 00000180 48 e5 e1 31 e9 c3 8f 56 2a c4 66 14 cd 4b 4a 35 H..1...V*.f..KJ5 00:22:29.497 00000190 90 b8 0c ef 73 15 86 77 c8 24 1e ff 4c ec ad fe ....s..w.$..L... 00:22:29.497 000001a0 cc d4 04 de bd 95 da df a9 ea e7 d4 42 4a 16 e1 ............BJ.. 00:22:29.497 000001b0 88 35 57 c3 0c e7 a2 38 4c 0c ec 72 4d cb 4e ab .5W....8L..rM.N. 00:22:29.497 000001c0 61 41 76 a8 81 a9 8a 75 ad 64 4b 59 b0 90 5b d9 aAv....u.dKY..[. 
00:22:29.497 000001d0 f4 fb 14 80 eb 60 38 3f 60 47 a7 0d 50 04 fd 78 .....`8?`G..P..x 00:22:29.497 000001e0 72 2c 91 75 25 3c d5 94 d5 05 01 b8 9b 1e 01 79 r,.u%<.........y 00:22:29.497 000001f0 12 14 f2 68 a6 e0 bd 22 f6 8a e8 fa 1d 66 4d b7 ...h...".....fM. 00:22:29.497 [2024-09-27 13:27:14.409763] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key1, hash=2, dhgroup=3, seq=3775755245, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.497 [2024-09-27 13:27:14.410059] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.497 [2024-09-27 13:27:14.434336] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.497 [2024-09-27 13:27:14.434817] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.497 [2024-09-27 13:27:14.435050] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.497 [2024-09-27 13:27:14.435278] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.497 [2024-09-27 13:27:14.486998] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.497 [2024-09-27 13:27:14.487260] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.497 [2024-09-27 13:27:14.487551] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 3 (ffdhe4096) 00:22:29.497 [2024-09-27 13:27:14.487767] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.497 [2024-09-27 13:27:14.488058] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.497 ctrlr pubkey: 00:22:29.497 00000000 29 f7 24 b5 85 8b 3f 05 ed d1 1f ee 39 ab 36 23 ).$...?.....9.6# 00:22:29.497 00000010 08 33 91 b3 f4 bb 40 d2 2e 1a 97 1d 24 cc 6a 76 .3....@.....$.jv 00:22:29.497 00000020 c3 e4 2d fb 46 24 0c a4 f7 66 d7 79 c0 f8 04 71 ..-.F$...f.y...q 00:22:29.497 00000030 db 11 6a f0 9b 33 e8 98 a5 c0 1d 88 d6 a0 45 a5 ..j..3........E. 00:22:29.497 00000040 96 a9 bc a6 39 1c 2e 0c 56 2d 11 6f ef 88 43 20 ....9...V-.o..C 00:22:29.497 00000050 27 19 63 28 1c c5 1f 45 d6 e9 fd 06 09 ca f1 f2 '.c(...E........ 00:22:29.497 00000060 c3 e2 af f7 4b fc 77 5a c8 82 19 1c 8c 5d f6 03 ....K.wZ.....].. 00:22:29.497 00000070 b0 6c a6 f7 80 fd 38 1a 23 9f 7d 69 ec 0a d9 bf .l....8.#.}i.... 00:22:29.497 00000080 30 4f ad c8 e1 90 38 01 e1 c5 31 e8 40 83 13 85 0O....8...1.@... 00:22:29.497 00000090 08 e9 90 f5 20 84 bf 80 16 2f d9 2a 81 2e a9 d1 .... ..../.*.... 00:22:29.497 000000a0 d3 98 96 3b 82 f9 c1 5a 40 2e 63 7d 50 bd 9c 0b ...;...Z@.c}P... 00:22:29.497 000000b0 9a 7a 3d d9 4a ec eb 2c 52 93 55 5f fd 24 b3 9e .z=.J..,R.U_.$.. 00:22:29.497 000000c0 1a fb ef ff 37 35 ee d0 a3 10 0c 46 a1 a0 8d 22 ....75.....F..." 00:22:29.497 000000d0 09 05 7a 74 3a 45 bb 29 ec 82 70 f3 6e a8 73 db ..zt:E.)..p.n.s. 
00:22:29.497 000000e0 7b cd 2a 14 a6 16 d8 81 23 59 27 4c 6d 94 e4 29 {.*.....#Y'Lm..) 00:22:29.497 000000f0 4c e0 cd a9 f1 f3 a6 32 7c 24 65 de 0c 8f 80 a7 L......2|$e..... 00:22:29.497 00000100 a6 20 4f ec 21 26 20 a8 ab d3 4c 48 48 70 95 0a . O.!& ...LHHp.. 00:22:29.497 00000110 ea 28 92 63 7d fa 56 60 36 6a 39 6e f5 55 46 9d .(.c}.V`6j9n.UF. 00:22:29.497 00000120 e7 23 05 91 68 58 ae 78 82 3f 46 a0 c7 77 e6 71 .#..hX.x.?F..w.q 00:22:29.497 00000130 8b 22 71 1f 46 a9 d6 1e 22 bc 2e 27 9b ce b6 36 ."q.F..."..'...6 00:22:29.497 00000140 06 22 1a fb 84 dc 38 c9 9b 92 1c de 82 42 4d bb ."....8......BM. 00:22:29.497 00000150 b8 1c f7 71 97 a0 b4 12 26 8f 40 08 25 56 62 f6 ...q....&.@.%Vb. 00:22:29.497 00000160 1a 55 db ca a6 5a ed 26 04 ac 44 a8 24 6c 58 30 .U...Z.&..D.$lX0 00:22:29.497 00000170 54 db df 4c 62 38 dd 45 39 e1 51 5e 59 45 a1 ee T..Lb8.E9.Q^YE.. 00:22:29.497 00000180 51 99 f2 9f 14 23 de e4 63 d6 31 a0 12 91 a2 71 Q....#..c.1....q 00:22:29.497 00000190 3a 4c 67 6d 5d c0 ac 7b a8 cb 06 e0 a4 00 4b ba :Lgm]..{......K. 00:22:29.497 000001a0 06 e6 95 fd ea b9 5b 1f d1 8c 2d 42 f5 d7 5e 3a ......[...-B..^: 00:22:29.497 000001b0 05 10 56 9e f5 ef 87 cd 23 30 84 ae a3 dd 9d fb ..V.....#0...... 00:22:29.497 000001c0 6b 66 54 14 32 14 19 38 8c 90 85 97 7d b0 c0 80 kfT.2..8....}... 00:22:29.497 000001d0 6e d3 ec 48 1d 67 67 80 c0 b9 2b 60 9d 42 e0 f9 n..H.gg...+`.B.. 00:22:29.497 000001e0 59 8d 7e 7c d6 6c f5 49 75 b5 b8 fb 5e ad 2c ea Y.~|.l.Iu...^.,. 00:22:29.497 000001f0 c9 e3 fd 63 e0 f6 92 d6 46 c6 7f 90 3f b4 90 1c ...c....F...?... 00:22:29.497 host pubkey: 00:22:29.497 00000000 c9 9b 55 44 0b 9a c8 28 53 57 17 df ab 9a 37 5d ..UD...(SW....7] 00:22:29.497 00000010 00 ab 60 47 3c 5e 45 dd 70 ef 76 46 97 0c 3f a3 ..`G<^E.p.vF..?. 00:22:29.497 00000020 8e 29 fb e1 4e 7b 65 a7 90 05 84 35 00 fe b3 3a .)..N{e....5...: 00:22:29.497 00000030 2f ab 79 83 b6 c0 f7 33 46 6e 3a 26 fd 38 e5 b3 /.y....3Fn:&.8.. 00:22:29.497 00000040 b1 2b 26 c7 00 16 6d 76 31 fd eb 46 cb f4 f5 71 .+&...mv1..F...q 00:22:29.497 00000050 f0 3b b8 3a 25 fd 61 05 ba a1 59 97 15 ab 7d c7 .;.:%.a...Y...}. 00:22:29.497 00000060 9f 12 bb ba ba ca b2 58 ca 7a 8e 0c 69 fc 1c eb .......X.z..i... 00:22:29.497 00000070 0b 30 83 c6 cb c6 89 f8 05 5d 07 e0 35 0b 23 aa .0.......]..5.#. 00:22:29.497 00000080 78 82 63 e7 78 36 48 2b a7 42 90 c4 16 63 35 ed x.c.x6H+.B...c5. 00:22:29.497 00000090 03 56 ca 5c bc 74 12 5c 3e 02 ee e2 de 3f 03 63 .V.\.t.\>....?.c 00:22:29.497 000000a0 3c 92 91 0b 7c c3 72 ba ab b3 53 d5 e4 3c 9c ba <...|.r...S..<.. 00:22:29.497 000000b0 96 0a 2c b9 3d 9d 26 16 d8 6c 01 d1 34 65 a4 83 ..,.=.&..l..4e.. 00:22:29.497 000000c0 00 c0 56 6d 12 e2 a3 ff 59 dc ac 6e a0 c9 0e 5e ..Vm....Y..n...^ 00:22:29.497 000000d0 2d a1 5f 7a 31 06 c6 8e 0b 15 b6 1d 8c 37 c1 a6 -._z1........7.. 00:22:29.497 000000e0 c9 cd 2a e0 8b d1 78 48 1d a1 91 9b 4a a0 8a 21 ..*...xH....J..! 00:22:29.497 000000f0 43 d9 1d 37 db f5 29 ef b8 ba b4 1f 18 06 e4 f3 C..7..)......... 00:22:29.497 00000100 d6 06 0e f1 30 36 32 11 97 19 aa 01 04 bc 29 c9 ....062.......). 00:22:29.497 00000110 f5 e1 69 37 ec 11 b4 aa dd 15 c6 be 65 6f dd 24 ..i7........eo.$ 00:22:29.497 00000120 ba 27 be 96 75 c4 b2 34 b3 b3 e7 a1 f3 b3 2b ba .'..u..4......+. 00:22:29.497 00000130 e7 e3 a8 33 bd 4f 08 10 fe 20 e1 f1 24 36 b6 f3 ...3.O... ..$6.. 00:22:29.497 00000140 aa 98 84 45 9d 8a 4c f1 ba 9b 11 3c da a2 f7 6b ...E..L....<...k 00:22:29.497 00000150 9c ea 54 2f 9e 28 08 47 48 02 6c 86 60 cb 19 a6 ..T/.(.GH.l.`... 
00:22:29.497 00000160 ac f9 13 be 4e 15 cd 5f 38 50 25 14 1e be 7e 19 ....N.._8P%...~. 00:22:29.497 00000170 0c 68 ea 99 25 e6 75 87 07 be 83 86 62 98 3d c5 .h..%.u.....b.=. 00:22:29.497 00000180 4b 4e 1f 50 14 aa 03 4c a5 0f 7c 6d b4 18 0f b9 KN.P...L..|m.... 00:22:29.497 00000190 4e db 61 20 98 45 80 43 a6 4d a6 60 34 f1 27 83 N.a .E.C.M.`4.'. 00:22:29.497 000001a0 a0 35 99 70 5b c0 f0 be fa cf fd 65 ac 7c b4 57 .5.p[......e.|.W 00:22:29.498 000001b0 04 02 98 ab 74 15 61 47 13 6d 5e 0d 5f 2a be f8 ....t.aG.m^._*.. 00:22:29.498 000001c0 98 ef b4 34 48 1d 56 15 c8 66 16 46 68 77 5a 0d ...4H.V..f.FhwZ. 00:22:29.498 000001d0 ab 42 b3 56 67 82 9c 99 c9 37 17 68 ae 73 21 fd .B.Vg....7.h.s!. 00:22:29.498 000001e0 27 01 7e db a6 f5 50 ab df 8e ae 85 97 7d 0e 7e '.~...P......}.~ 00:22:29.498 000001f0 d2 31 9e f0 60 f3 a4 0f 67 e0 1b cc ee 8e 31 c2 .1..`...g.....1. 00:22:29.498 dh secret: 00:22:29.498 00000000 13 25 f6 30 73 26 49 77 f5 cb 42 f8 49 2b 5e 9e .%.0s&Iw..B.I+^. 00:22:29.498 00000010 e2 c0 d3 48 4d 1e 2e ac e1 8b b1 6c ed 19 1c 5d ...HM......l...] 00:22:29.498 00000020 cd 9b 8a 8a cf 2b 9f 99 80 00 91 08 48 5a ce 57 .....+......HZ.W 00:22:29.498 00000030 1d 5a ef 8a 8a e6 8d ed 25 ef 97 ba 59 27 25 d6 .Z......%...Y'%. 00:22:29.498 00000040 6d ce 2c 1c 80 51 ea 0e db 61 c5 17 57 f2 51 93 m.,..Q...a..W.Q. 00:22:29.498 00000050 0e c5 59 89 3b 63 f8 78 41 ba e3 9a 36 82 bd 09 ..Y.;c.xA...6... 00:22:29.498 00000060 2b eb 9b 40 0b 0e f9 ea 81 f7 5f b9 56 b1 f9 de +..@......_.V... 00:22:29.498 00000070 b8 24 1e d3 2b 56 93 03 8e 42 77 1d f3 f7 8e 6a .$..+V...Bw....j 00:22:29.498 00000080 c4 5a 75 43 97 94 41 54 e6 7f 90 7b dd 96 72 7d .ZuC..AT...{..r} 00:22:29.498 00000090 d0 cf 64 ca 9d 8e 25 d5 48 2c c2 b6 bd 9b 78 0a ..d...%.H,....x. 00:22:29.498 000000a0 ce fc 1a 10 fe 79 46 a0 3b 0e e9 6e fc c3 aa 3e .....yF.;..n...> 00:22:29.498 000000b0 ed e2 41 c9 bc f5 52 03 d4 ed 2c 42 e1 02 c7 87 ..A...R...,B.... 00:22:29.498 000000c0 98 81 68 35 27 f9 7a cb 8c 81 9c fd ac b0 03 aa ..h5'.z......... 00:22:29.498 000000d0 1d 80 6c fc 42 8b 89 ff a0 ed ca 3f f2 bb 73 28 ..l.B......?..s( 00:22:29.498 000000e0 c5 cb 7f d2 68 2e 25 da 0c ff c6 d9 11 64 83 a0 ....h.%......d.. 00:22:29.498 000000f0 73 70 4e e0 d5 12 a4 09 b9 2a 97 6d 8e a3 1a 2c spN......*.m..., 00:22:29.498 00000100 13 1d b9 f2 51 85 51 ed 7c 10 b4 c5 ed 5a 25 b1 ....Q.Q.|....Z%. 00:22:29.498 00000110 4e 16 52 53 f5 62 a6 2a b6 f2 fb 23 da 8c 1d da N.RS.b.*...#.... 00:22:29.498 00000120 c1 a7 d7 2a c2 1a 08 c0 e2 3b aa cc 60 57 8b 0c ...*.....;..`W.. 00:22:29.498 00000130 a3 33 d0 be b4 76 bc 3b b3 a0 5d cf 66 d1 8a f8 .3...v.;..].f... 00:22:29.498 00000140 05 76 0e 52 11 08 fd 72 88 02 ae 64 09 f2 66 7f .v.R...r...d..f. 00:22:29.498 00000150 41 d8 3e d0 a1 75 a2 d2 12 00 6c 7d 94 e3 23 32 A.>..u....l}..#2 00:22:29.498 00000160 ff 3b 36 a2 7b c7 e5 1d 5a 17 5e 4b f4 ef 0b 6b .;6.{...Z.^K...k 00:22:29.498 00000170 8c 92 cc 29 92 ff 49 7e c2 58 d3 cf c2 1e c6 70 ...)..I~.X.....p 00:22:29.498 00000180 6c ac 67 4f 77 b0 c7 94 55 aa 27 94 20 4f 07 07 l.gOw...U.'. O.. 00:22:29.498 00000190 31 22 a3 e1 d6 93 5b 00 bf 8b 4a ed c7 7d be f6 1"....[...J..}.. 00:22:29.498 000001a0 fe 80 de 2f 20 44 3f e2 84 a2 6b b6 45 23 a7 8d .../ D?...k.E#.. 00:22:29.498 000001b0 79 40 18 2b 34 35 1e de 3d 83 53 85 91 a6 a5 c0 y@.+45..=.S..... 00:22:29.498 000001c0 ab fb 68 34 8e fe 28 5e 4b e6 12 58 99 51 62 20 ..h4..(^K..X.Qb 00:22:29.498 000001d0 64 ab f5 fa c6 56 3f c5 e4 99 1a c9 00 3c fd 11 d....V?......<.. 
00:22:29.498 000001e0 8b 84 e9 5f 3b 3d 9c aa eb 49 d4 30 c8 05 53 90 ..._;=...I.0..S. 00:22:29.498 000001f0 04 6c 35 68 5a c0 a3 fe 6c 3f e7 50 25 c2 7a 40 .l5hZ...l?.P%.z@ 00:22:29.498 [2024-09-27 13:27:14.515343] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key1, hash=2, dhgroup=3, seq=3775755246, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.498 [2024-09-27 13:27:14.515651] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.498 [2024-09-27 13:27:14.540402] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.498 [2024-09-27 13:27:14.540826] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.498 [2024-09-27 13:27:14.541120] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.498 [2024-09-27 13:27:14.541407] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.498 [2024-09-27 13:27:14.657787] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.498 [2024-09-27 13:27:14.657998] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 2 (sha384) 00:22:29.498 [2024-09-27 13:27:14.658118] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 3 (ffdhe4096) 00:22:29.498 [2024-09-27 13:27:14.658315] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.498 [2024-09-27 13:27:14.658560] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.498 ctrlr pubkey: 00:22:29.498 00000000 62 20 97 be 84 d7 2b 9d 48 c4 50 e3 67 a7 59 3b b ....+.H.P.g.Y; 00:22:29.498 00000010 49 06 52 6c 13 d6 76 20 e4 a5 50 c3 82 88 5a 47 I.Rl..v ..P...ZG 00:22:29.498 00000020 34 ae a3 1e 27 8f db e6 65 9b 7c 1f c2 1e ea 4b 4...'...e.|....K 00:22:29.498 00000030 0e 57 b0 6a bd 9c 0b e4 4d b6 74 8f 2f 69 f1 59 .W.j....M.t./i.Y 00:22:29.498 00000040 08 7c 52 20 ca 94 b8 48 4a 65 cc de de b8 fa 48 .|R ...HJe.....H 00:22:29.498 00000050 29 ad 74 18 62 6c 1a 22 0b f7 b0 e1 37 b9 2e c5 ).t.bl."....7... 00:22:29.498 00000060 1b 79 fd e6 4b 2c 20 ac 23 d9 45 26 64 23 fd ea .y..K, .#.E&d#.. 00:22:29.498 00000070 9f c6 3d 00 a5 f8 28 aa da 3f 55 96 c4 05 57 94 ..=...(..?U...W. 00:22:29.498 00000080 d9 9c 6d 47 60 fa f9 d5 bb 5e a1 3b 50 c1 f9 0b ..mG`....^.;P... 00:22:29.498 00000090 9c 23 a4 bd 28 0f cd cd 08 62 1a 6b 83 9f ce 11 .#..(....b.k.... 00:22:29.498 000000a0 13 71 cf e1 20 c2 ae b7 15 62 d9 89 2b 88 98 a9 .q.. ....b..+... 00:22:29.498 000000b0 a7 9e c6 a9 71 23 0d 78 d1 9b c5 68 a9 c7 96 34 ....q#.x...h...4 00:22:29.498 000000c0 93 10 ac 23 e9 24 41 15 8c 87 95 36 77 ee ba 1c ...#.$A....6w... 00:22:29.498 000000d0 16 0f 08 d8 3a 01 ba 31 4e 4d 03 9a 5d 64 af 67 ....:..1NM..]d.g 00:22:29.498 000000e0 ee ff ef 98 87 68 fa 21 c9 86 58 0a cc 00 d0 c5 .....h.!..X..... 
00:22:29.498 000000f0 e0 f2 9d 8a b1 c0 b0 38 68 5a 3a 9f a3 31 c7 44 .......8hZ:..1.D 00:22:29.498 00000100 44 33 64 3f b4 5b 61 82 92 a1 fb a7 a8 d1 b1 85 D3d?.[a......... 00:22:29.498 00000110 d7 14 be 0c 96 53 0c fb 4d c8 04 7b 8c 81 56 74 .....S..M..{..Vt 00:22:29.498 00000120 06 72 37 fd 80 d2 9b 13 fd 17 52 41 bf 28 3b 19 .r7.......RA.(;. 00:22:29.498 00000130 5d e7 26 2c 10 0c 6c f7 b0 9e 0b 1e 98 ca 02 d6 ].&,..l......... 00:22:29.498 00000140 ec ca bd c6 2c 3d ba bc b5 1f 13 6c 84 2d 5d f7 ....,=.....l.-]. 00:22:29.498 00000150 bd 85 59 a3 7c 14 35 37 7b 64 a7 b4 70 1a 3e c2 ..Y.|.57{d..p.>. 00:22:29.498 00000160 5b 4e 88 a1 3c 62 9e ec 2a a9 0f 48 11 b2 dd e5 [N..F>.4.#gI.9.gY 00:22:29.498 00000010 24 6b d1 82 3c 6c 5b 3d e4 94 05 37 4a 05 aa 0b $k../..~^.U..6. 00:22:29.498 00000070 f3 0e 92 7e 25 df 58 e0 b5 8e bd b0 61 26 65 34 ...~%.X.....a&e4 00:22:29.498 00000080 cf 06 2b 57 af 6e cf 92 92 e0 6b e1 21 29 97 bd ..+W.n....k.!).. 00:22:29.498 00000090 03 49 53 99 3c 58 78 7a c2 64 b7 c0 18 f5 ec a8 .IS...6o 00:22:29.498 000001f0 52 f2 c9 a3 bb 85 3c 95 4f b7 f6 84 42 29 a3 ef R.....<.O...B).. 00:22:29.498 dh secret: 00:22:29.498 00000000 b2 a0 31 f7 7d b7 45 5e 1a 12 7b 52 77 e3 a8 55 ..1.}.E^..{Rw..U 00:22:29.498 00000010 1e fa e0 99 b6 f5 d9 7e 39 37 3e 4b 01 ac e4 f2 .......~97>K.... 00:22:29.498 00000020 a5 9b e1 12 10 a3 4f e0 4c d3 22 66 8c 03 f8 a3 ......O.L."f.... 00:22:29.498 00000030 d8 e4 8f 69 73 ee f9 17 d1 89 ad 55 8d 16 68 e2 ...is......U..h. 00:22:29.498 00000040 45 6c 9b ef ea 74 f8 18 c8 f9 b7 15 1d fa f1 30 El...t.........0 00:22:29.498 00000050 f4 03 48 dc d9 07 4b eb 11 a3 ad d4 fc 23 7b d9 ..H...K......#{. 00:22:29.498 00000060 cd 73 45 29 01 aa a9 a8 b9 1c 37 0d 25 37 d4 04 .sE)......7.%7.. 00:22:29.498 00000070 3f f2 ab dc bb ef 2b 85 c2 fc c9 45 a6 db 11 90 ?.....+....E.... 00:22:29.498 00000080 a1 e3 58 80 f4 c4 7a 1d 77 8f 4a cd 38 a8 6d 7d ..X...z.w.J.8.m} 00:22:29.498 00000090 5f 25 c6 9f 97 8b f1 2b 34 15 a8 8c 11 f5 38 cd _%.....+4.....8. 00:22:29.498 000000a0 b0 dc 9c 50 04 80 b1 41 b1 14 a1 5c a5 78 c1 2c ...P...A...\.x., 00:22:29.498 000000b0 89 d7 58 f8 cb 51 08 9d 63 5d 33 c8 c2 e2 4d b8 ..X..Q..c]3...M. 00:22:29.498 000000c0 77 4f a7 a6 95 b7 e6 98 8a 70 0f 06 bd c6 21 8d wO.......p....!. 00:22:29.498 000000d0 4b 4c 7e c3 dc 91 74 c5 90 d5 14 59 af 9e 12 7d KL~...t....Y...} 00:22:29.498 000000e0 ec b2 2e 5d fa 2e 6e cb 3c 71 34 ec 44 69 d7 e4 ...]..n..."@. 00:22:29.499 000001c0 df d7 a4 fc 78 68 8f 72 28 dc f8 12 0c 7d 3f 84 ....xh.r(....}?. 00:22:29.499 000001d0 d0 58 65 3c 34 ed b6 f0 6e 72 9d 1e 83 86 fc cb .Xe<4...nr...... 00:22:29.499 000001e0 e4 29 50 f6 08 f6 ac 15 83 b2 71 47 97 b5 06 a8 .)P.......qG.... 00:22:29.499 000001f0 35 cb 25 4e 75 4c dd c7 53 c0 60 f1 75 16 77 06 5.%NuL..S.`.u.w. 
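The "ctrlr pubkey", "host pubkey" and "dh secret" dumps above are the raw buffers from SPDK's NVMe in-band authentication (DH-HMAC-CHAP) key exchange; the surrounding nvme_auth.c debug lines show this pass negotiating digest 2 (sha384) with dhgroup 3 (ffdhe4096). The snippet below is a minimal Python sketch of the modular arithmetic those three buffers correspond to; the group (p=23, g=5) and the variable names are illustrative stand-ins, not the real ffdhe4096 parameters or SPDK's implementation.

# Toy finite-field Diffie-Hellman round trip mirroring the dumps above:
# each side publishes g^x mod p and both derive the same shared secret.
import secrets

p, g = 23, 5                          # illustrative safe prime and generator (not ffdhe4096)

def keypair():
    x = secrets.randbelow(p - 2) + 1  # private exponent in [1, p-2]
    return x, pow(g, x, p)            # (private, public)

ctrlr_priv, ctrlr_pub = keypair()     # analogue of the "ctrlr pubkey" dump
host_priv, host_pub = keypair()       # analogue of the "host pubkey" dump

# Both sides raise the peer's public value to their own private exponent
# and obtain the same value, the analogue of the "dh secret" dump.
assert pow(host_pub, ctrlr_priv, p) == pow(ctrlr_pub, host_priv, p)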
00:22:29.499 [2024-09-27 13:27:14.686297] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key2, hash=2, dhgroup=3, seq=3775755247, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.499 [2024-09-27 13:27:14.686637] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.499 [2024-09-27 13:27:14.710953] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.499 [2024-09-27 13:27:14.711276] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.499 [2024-09-27 13:27:14.711542] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.499 [2024-09-27 13:27:14.711747] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.499 [2024-09-27 13:27:14.763528] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.499 [2024-09-27 13:27:14.763868] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.499 [2024-09-27 13:27:14.764015] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 3 (ffdhe4096) 00:22:29.499 [2024-09-27 13:27:14.764198] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.499 [2024-09-27 13:27:14.764496] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.499 ctrlr pubkey: 00:22:29.499 00000000 62 20 97 be 84 d7 2b 9d 48 c4 50 e3 67 a7 59 3b b ....+.H.P.g.Y; 00:22:29.499 00000010 49 06 52 6c 13 d6 76 20 e4 a5 50 c3 82 88 5a 47 I.Rl..v ..P...ZG 00:22:29.499 00000020 34 ae a3 1e 27 8f db e6 65 9b 7c 1f c2 1e ea 4b 4...'...e.|....K 00:22:29.499 00000030 0e 57 b0 6a bd 9c 0b e4 4d b6 74 8f 2f 69 f1 59 .W.j....M.t./i.Y 00:22:29.499 00000040 08 7c 52 20 ca 94 b8 48 4a 65 cc de de b8 fa 48 .|R ...HJe.....H 00:22:29.499 00000050 29 ad 74 18 62 6c 1a 22 0b f7 b0 e1 37 b9 2e c5 ).t.bl."....7... 00:22:29.499 00000060 1b 79 fd e6 4b 2c 20 ac 23 d9 45 26 64 23 fd ea .y..K, .#.E&d#.. 00:22:29.499 00000070 9f c6 3d 00 a5 f8 28 aa da 3f 55 96 c4 05 57 94 ..=...(..?U...W. 00:22:29.499 00000080 d9 9c 6d 47 60 fa f9 d5 bb 5e a1 3b 50 c1 f9 0b ..mG`....^.;P... 00:22:29.499 00000090 9c 23 a4 bd 28 0f cd cd 08 62 1a 6b 83 9f ce 11 .#..(....b.k.... 00:22:29.499 000000a0 13 71 cf e1 20 c2 ae b7 15 62 d9 89 2b 88 98 a9 .q.. ....b..+... 00:22:29.499 000000b0 a7 9e c6 a9 71 23 0d 78 d1 9b c5 68 a9 c7 96 34 ....q#.x...h...4 00:22:29.499 000000c0 93 10 ac 23 e9 24 41 15 8c 87 95 36 77 ee ba 1c ...#.$A....6w... 00:22:29.499 000000d0 16 0f 08 d8 3a 01 ba 31 4e 4d 03 9a 5d 64 af 67 ....:..1NM..]d.g 00:22:29.499 000000e0 ee ff ef 98 87 68 fa 21 c9 86 58 0a cc 00 d0 c5 .....h.!..X..... 00:22:29.499 000000f0 e0 f2 9d 8a b1 c0 b0 38 68 5a 3a 9f a3 31 c7 44 .......8hZ:..1.D 00:22:29.499 00000100 44 33 64 3f b4 5b 61 82 92 a1 fb a7 a8 d1 b1 85 D3d?.[a......... 
00:22:29.499 00000110 d7 14 be 0c 96 53 0c fb 4d c8 04 7b 8c 81 56 74 .....S..M..{..Vt 00:22:29.499 00000120 06 72 37 fd 80 d2 9b 13 fd 17 52 41 bf 28 3b 19 .r7.......RA.(;. 00:22:29.499 00000130 5d e7 26 2c 10 0c 6c f7 b0 9e 0b 1e 98 ca 02 d6 ].&,..l......... 00:22:29.499 00000140 ec ca bd c6 2c 3d ba bc b5 1f 13 6c 84 2d 5d f7 ....,=.....l.-]. 00:22:29.499 00000150 bd 85 59 a3 7c 14 35 37 7b 64 a7 b4 70 1a 3e c2 ..Y.|.57{d..p.>. 00:22:29.499 00000160 5b 4e 88 a1 3c 62 9e ec 2a a9 0f 48 11 b2 dd e5 [N..}..v4.~.;.AX.lR 00:22:29.499 00000160 b2 3b 4a 37 7c 9e 97 fb 96 70 40 3b dc 23 49 3a .;J7|....p@;.#I: 00:22:29.499 00000170 ff a4 1e d1 ef f6 7b 48 5f 25 b0 89 14 d1 1c 0b ......{H_%...... 00:22:29.499 00000180 dc f3 8e e3 98 c4 70 59 ed c3 be cf 15 8d 8b 4c ......pY.......L 00:22:29.499 00000190 3b 84 a4 a4 37 ba 1e a1 e9 ad 7a a7 08 ca dd 54 ;...7.....z....T 00:22:29.499 000001a0 58 dd d1 76 9a a2 9a d8 41 0b f9 94 ea 0b 2a ae X..v....A.....*. 00:22:29.499 000001b0 4f 29 02 8f 90 9b 5c 62 b1 e4 99 ef 86 ac 38 22 O)....\b......8" 00:22:29.499 000001c0 32 e5 59 32 7c 99 6a e6 6c ce fa 86 ae 5d 12 c1 2.Y2|.j.l....].. 00:22:29.499 000001d0 93 7b 12 59 1a 69 4a b6 c7 70 53 ab 6b 5e cc c9 .{.Y.iJ..pS.k^.. 00:22:29.499 000001e0 ba d1 8b 52 68 ef 2f 4f c1 2e bf 3a b2 44 69 9a ...Rh./O...:.Di. 00:22:29.499 000001f0 7c 6b da 0d e7 1c 3a 3b 41 9d 32 21 c0 33 2a 19 |k....:;A.2!.3*. 00:22:29.499 dh secret: 00:22:29.499 00000000 11 0c 43 c8 b6 41 30 9f ca c8 26 ea e4 e0 90 03 ..C..A0...&..... 00:22:29.499 00000010 16 71 cb 23 2f 70 aa 58 62 ea 3e bf 25 af ac 1d .q.#/p.Xb.>.%... 00:22:29.499 00000020 70 53 30 71 c6 e6 54 5e a5 83 c2 af 28 13 1a 43 pS0q..T^....(..C 00:22:29.499 00000030 6d b5 04 6a 92 af 80 19 bb 6c cc 42 a5 12 30 fa m..j.....l.B..0. 00:22:29.499 00000040 54 25 13 16 44 78 7c 76 49 85 af 4e 10 68 79 86 T%..Dx|vI..N.hy. 00:22:29.499 00000050 21 24 32 13 bc 94 38 ac a6 d5 1a 23 3a f5 97 84 !$2...8....#:... 00:22:29.499 00000060 f7 5f d0 2d d8 e6 61 0f 0c aa 03 b6 7d 45 9e ce ._.-..a.....}E.. 00:22:29.499 00000070 18 b7 fa fa a0 29 84 13 32 ba 1a 84 a5 02 c8 db .....)..2....... 00:22:29.499 00000080 57 25 da da 9b a4 17 d0 49 59 8e 39 69 01 a9 f5 W%......IY.9i... 00:22:29.499 00000090 ab d8 3d 32 ce f2 37 a5 e9 9c d0 da e4 c1 03 3d ..=2..7........= 00:22:29.499 000000a0 0f 12 ec 1a b9 ea 72 a1 02 93 03 0d 31 70 83 1d ......r.....1p.. 00:22:29.499 000000b0 1f ea b6 63 8b 82 5f 26 b2 cf 46 2a 29 52 f3 f3 ...c.._&..F*)R.. 00:22:29.499 000000c0 5a 4a 1f 93 c9 8d af d1 4a 9a 4f a5 9c 89 e3 2f ZJ......J.O..../ 00:22:29.499 000000d0 60 79 32 39 75 a6 54 46 a9 91 cc 84 0b 90 e4 97 `y29u.TF........ 00:22:29.499 000000e0 eb 69 46 e5 4b 9d d1 92 fd a8 58 04 62 b3 80 7b .iF.K.....X.b..{ 00:22:29.499 000000f0 ad d0 c8 26 f1 68 1d 9b 4c 56 6d 31 52 5d f1 aa ...&.h..LVm1R].. 00:22:29.499 00000100 4e 05 4d 0a 7c a1 a3 0f e3 ec 3a 7a 8d ca b8 c8 N.M.|.....:z.... 00:22:29.499 00000110 04 d9 85 83 1b 22 d1 df cb 40 c5 7e 19 80 ca d4 ....."...@.~.... 00:22:29.499 00000120 cc a4 d1 a8 39 da 39 93 d6 ec 98 aa 94 c3 29 bb ....9.9.......). 00:22:29.499 00000130 fc c1 69 f5 7d 02 38 d8 c1 65 c5 88 8c f0 45 a7 ..i.}.8..e....E. 00:22:29.499 00000140 d6 ab 04 db 6e 4a 4d 52 27 58 d1 7a 05 a5 b9 2a ....nJMR'X.z...* 00:22:29.499 00000150 ad 27 b8 c6 ae a1 e7 cc 34 89 89 40 20 93 66 c5 .'......4..@ .f. 00:22:29.499 00000160 65 75 82 db 47 3e 78 11 6a 9d 0e e7 7a de 20 a4 eu..G>x.j...z. . 00:22:29.499 00000170 32 ed 4c f4 3a 5d dd d8 52 f0 6e f1 d3 5e 2e a2 2.L.:]..R.n..^.. 
00:22:29.499 00000180 3d bf 54 89 92 b6 3e 9e e1 9e 23 eb de 91 85 56 =.T...>...#....V 00:22:29.499 00000190 82 5d 70 37 5c 88 d7 7a dd 40 40 3c 70 8b 1c f0 .]p7\..z.@@.f0...6:....U% 00:22:29.500 000000c0 3a 2a df 6b c0 eb fa 12 2c 0e 82 f8 1f 3a 8f d4 :*.k....,....:.. 00:22:29.500 000000d0 91 57 7d 04 c1 ab e4 e5 3d 08 6b 60 d3 af 96 7f .W}.....=.k`.... 00:22:29.500 000000e0 4f 7a ae 6b 7e 90 bf 75 82 dc 57 52 84 18 14 58 Oz.k~..u..WR...X 00:22:29.500 000000f0 dc 2d 4b f7 cc 29 74 31 d2 52 52 17 07 f4 1a ca .-K..)t1.RR..... 00:22:29.500 00000100 62 61 23 79 40 9c d0 99 25 1e c3 e4 e0 f9 4c ac ba#y@...%.....L. 00:22:29.500 00000110 70 97 a1 42 aa 17 66 31 ba 3d 37 6b a5 2d 21 17 p..B..f1.=7k.-!. 00:22:29.500 00000120 f6 a7 12 23 e8 c4 fd ff f5 18 09 3b 94 db 6b 43 ...#.......;..kC 00:22:29.500 00000130 70 bf 94 74 ae ef f6 f5 93 60 e5 26 bb b8 35 a5 p..t.....`.&..5. 00:22:29.500 00000140 44 40 96 b8 5b ab b9 cf bf d7 40 a1 3c 5e 4d 82 D@..[.....@.<^M. 00:22:29.500 00000150 7f 5f bb e6 25 73 2f 89 bd af aa 66 4f f5 a4 1a ._..%s/....fO... 00:22:29.500 00000160 4d 8f c3 5f 4d 53 0c a0 e7 6f c4 5d b6 ae 6b 71 M.._MS...o.]..kq 00:22:29.500 00000170 4a 08 dd 99 75 87 4b d6 dd 03 a3 21 89 08 ce 99 J...u.K....!.... 00:22:29.500 00000180 97 d8 86 34 4e e3 c6 f1 a7 1b a0 21 bf 73 bc a0 ...4N......!.s.. 00:22:29.500 00000190 2a a9 fb 94 9a d0 ae 3c 27 0a ac 6f 43 7a 07 3b *......<'..oCz.; 00:22:29.500 000001a0 dd d4 11 ba 87 f0 86 7b fa c1 eb 26 07 bf 9e f0 .......{...&.... 00:22:29.500 000001b0 fa 97 cd 86 68 3f 30 56 81 04 18 0e 5c df 60 f7 ....h?0V....\.`. 00:22:29.500 000001c0 ef e2 e3 1f 92 4d 5c a2 e2 eb a9 2a 1e c5 d1 8b .....M\....*.... 00:22:29.500 000001d0 44 b3 16 b9 7a f6 25 04 bf 49 29 85 a9 0b a2 dd D...z.%..I)..... 00:22:29.500 000001e0 ba 26 18 f1 68 d3 50 14 79 07 0a bb b2 41 d5 ad .&..h.P.y....A.. 00:22:29.500 000001f0 a8 14 99 c4 30 c0 70 91 5e 72 18 02 f5 5c 6f 38 ....0.p.^r...\o8 00:22:29.500 host pubkey: 00:22:29.500 00000000 ed b1 e9 77 1f e5 24 08 00 e7 ad 5f 8c 34 ff 3b ...w..$...._.4.; 00:22:29.500 00000010 6a a4 87 f2 8e 8f c5 47 e6 84 85 fe 81 2c f0 52 j......G.....,.R 00:22:29.500 00000020 94 68 6a 87 e1 d4 dd 2d 04 bf a5 d2 78 31 dc 37 .hj....-....x1.7 00:22:29.500 00000030 70 d1 b4 f5 59 84 dc 8f 4a 12 9f 2a e5 40 a5 68 p...Y...J..*.@.h 00:22:29.500 00000040 4d 20 a8 f7 90 48 c6 2a 3a 25 a3 e5 77 b8 48 fa M ...H.*:%..w.H. 00:22:29.500 00000050 a5 3a 94 8e ca 7e 50 bb 9c 45 f8 2f d4 45 c9 cd .:...~P..E./.E.. 00:22:29.500 00000060 ee 70 8e 67 a2 53 1b 71 09 4a 24 98 1e 45 83 54 .p.g.S.q.J$..E.T 00:22:29.500 00000070 cc e1 38 68 db 10 00 1e 98 df 8f 7c 41 f7 3b ac ..8h.......|A.;. 00:22:29.500 00000080 42 12 29 22 5e 7b 12 bd b2 93 05 b4 95 ab 5c ac B.)"^{........\. 00:22:29.500 00000090 34 e3 99 5b 2c e6 8f dd 6d 87 d1 d8 a9 92 e6 3c 4..[,...m......< 00:22:29.500 000000a0 69 fe 90 f2 db 9f d5 9b 2f 5c 5e bf 47 a9 9c b8 i......./\^.G... 00:22:29.500 000000b0 c0 a6 61 4f 98 27 8c 73 63 cd 6b 6e 46 42 c4 55 ..aO.'.sc.knFB.U 00:22:29.500 000000c0 6a 7b de 89 c3 2e 84 36 a4 38 1f 30 e8 1e b3 71 j{.....6.8.0...q 00:22:29.500 000000d0 ca b2 01 a5 50 22 23 36 31 5c 47 29 dd 06 6d b0 ....P"#61\G)..m. 00:22:29.500 000000e0 30 75 0a 8a 0b 5b 83 69 aa 13 27 ee 40 0a 46 ff 0u...[.i..'.@.F. 00:22:29.500 000000f0 ee cc bb 0f 3f da 98 b0 0b af 07 f9 67 3c 2e 02 ....?.......g<.. 00:22:29.500 00000100 b2 a4 80 c1 3c 55 91 5a ae 9d 3e 04 42 ce 4e 86 .....B.N. 00:22:29.500 00000110 21 4e 3b e9 7a f5 8e d8 1f ab 20 66 d2 0e 02 b7 !N;.z..... f.... 
00:22:29.500 00000120 0d 56 3e b3 05 79 9b 79 71 45 e0 db f9 a8 85 ab .V>..y.yqE...... 00:22:29.500 00000130 80 3c 0a 71 dd 7e 84 8b e8 38 e6 49 6f d3 b4 36 .<.q.~...8.Io..6 00:22:29.500 00000140 dc cf af db 67 c3 eb df 7d e2 90 9e 1f f9 e5 ab ....g...}....... 00:22:29.500 00000150 fe ac 10 83 d3 dd 29 2a 59 13 48 52 47 ab 70 89 ......)*Y.HRG.p. 00:22:29.500 00000160 e3 fc 2c aa 34 9c 18 54 b7 a5 df 66 f5 fa 27 4b ..,.4..T...f..'K 00:22:29.500 00000170 84 cf e9 ac ba 33 6c bd a9 c1 60 06 33 1d 40 23 .....3l...`.3.@# 00:22:29.500 00000180 46 ae 79 7c 19 3c 73 e4 7b 61 7f 1a cc 21 00 85 F.y|..\..]&.rS$c.4. 00:22:29.500 000001c0 f1 9c b7 0f 0a 66 d9 de 80 e5 f9 41 f8 4d c0 7c .....f.....A.M.| 00:22:29.500 000001d0 d9 cd 48 4c 1c 8f 43 62 bb d7 98 99 63 f8 30 09 ..HL..Cb....c.0. 00:22:29.500 000001e0 0d 60 c3 2f d4 12 da 86 97 f7 8a cf 80 3a 11 2b .`./.........:.+ 00:22:29.500 000001f0 91 30 e9 3d c0 f4 62 66 ed b1 db 95 6f 50 91 00 .0.=..bf....oP.. 00:22:29.500 dh secret: 00:22:29.500 00000000 ab e9 6f 8b 14 93 10 7b 27 ed 0a 10 e8 49 31 d3 ..o....{'....I1. 00:22:29.500 00000010 bb 9b 6e 8d 70 4e cf fd cb be e2 e6 98 d7 c6 58 ..n.pN.........X 00:22:29.500 00000020 a6 9a ec 9d 1b fe 5f a0 2a 4b b2 af 50 a1 f2 16 ......_.*K..P... 00:22:29.500 00000030 ad 7a f5 c6 f7 36 13 8d a7 3b c5 a8 c0 3f f9 57 .z...6...;...?.W 00:22:29.500 00000040 0a e0 e0 97 18 45 82 6f 5a c4 c9 be fb e1 3a fa .....E.oZ.....:. 00:22:29.500 00000050 eb f0 1c cf c0 1c e0 3d 2f 6a a5 a6 df 68 76 c7 .......=/j...hv. 00:22:29.500 00000060 cf bb 3a bd 1d 70 33 6c 56 cd 06 36 34 9b 4f 72 ..:..p3lV..64.Or 00:22:29.500 00000070 0f 76 57 f0 12 b1 31 47 92 86 75 a4 79 3f 31 56 .vW...1G..u.y?1V 00:22:29.500 00000080 9f 23 5d 6d 61 ed 15 37 4e db 27 8b 40 11 34 b1 .#]ma..7N.'.@.4. 00:22:29.500 00000090 ed 3e a6 67 50 fc cb d3 be e3 ff ea 6d 0d 0e dc .>.gP.......m... 00:22:29.500 000000a0 fb 47 bc 13 56 1e f8 75 93 7c c5 a5 53 f4 58 3e .G..V..u.|..S.X> 00:22:29.500 000000b0 6a ea 56 e1 a2 0b d9 b7 43 84 2c bf 0d f0 67 1d j.V.....C.,...g. 00:22:29.500 000000c0 66 7a d2 ae b6 65 b4 97 58 fc c7 52 4f da ba 70 fz...e..X..RO..p 00:22:29.500 000000d0 99 be 92 32 b8 96 64 61 0d 28 ce 5d ef ba c7 84 ...2..da.(.].... 00:22:29.500 000000e0 80 32 74 8d 7a 1d c2 02 b3 02 88 2b d9 d1 6d 95 .2t.z......+..m. 00:22:29.500 000000f0 ed ad 67 3f db 15 55 f8 a1 41 ff a2 90 89 34 4f ..g?..U..A....4O 00:22:29.500 00000100 a1 19 06 c1 ea 96 e3 0b da 3c b5 d4 66 1f a0 91 .........<..f... 00:22:29.500 00000110 6f 15 c1 71 c4 19 33 74 76 3f 0c 2a 47 85 b5 f7 o..q..3tv?.*G... 00:22:29.500 00000120 49 b3 ca 51 a9 92 9c 72 f4 ce 54 f3 8e 33 7e b9 I..Q...r..T..3~. 00:22:29.500 00000130 87 2c bf 0a 57 f6 ca d7 e2 12 68 76 45 d3 c7 a6 .,..W.....hvE... 00:22:29.500 00000140 d0 3b cc de 01 39 7c 07 50 97 7a b5 6b c0 7f 3f .;...9|.P.z.k..? 00:22:29.500 00000150 74 2a c3 da 37 8c 58 06 57 9b 73 5e d2 85 8a e5 t*..7.X.W.s^.... 00:22:29.500 00000160 02 b8 2c f2 e0 a8 00 01 3a 6f cf cb 19 3e 0d b6 ..,.....:o...>.. 00:22:29.500 00000170 8c 1a ec 4a 6e 63 07 34 7e d9 a0 b7 0b 85 4d 2a ...Jnc.4~.....M* 00:22:29.500 00000180 a3 f5 30 70 bd 6a cc 1c ac 88 f4 ac 99 b2 c2 0d ..0p.j.......... 00:22:29.500 00000190 2b 2a d4 dc 7b 4b 71 4e c2 2d ee 12 bf b1 c3 af +*..{KqN.-...... 00:22:29.500 000001a0 6e ad 04 2e bb 02 fd f4 ad e0 5c 03 06 d9 cb 75 n.........\....u 00:22:29.500 000001b0 be 1b 81 97 d9 41 4f 36 8a ec 66 b3 32 17 e0 86 .....AO6..f.2... 00:22:29.500 000001c0 ef 23 ec 2e 41 38 f9 94 f5 66 57 92 93 c9 ef a0 .#..A8...fW..... 
00:22:29.500 000001d0 2b 29 3d 27 33 4b b1 1c 2e 93 40 86 7d 13 1c eb +)='3K....@.}... 00:22:29.500 000001e0 e4 4b 4a f0 58 d9 68 0c 2d 0f 4b 82 44 85 3d 11 .KJ.X.h.-.K.D.=. 00:22:29.500 000001f0 91 b7 4f d6 59 bd 1f 6d 5c f3 f5 bc 08 8c e9 d0 ..O.Y..m\....... 00:22:29.500 [2024-09-27 13:27:14.960225] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key3, hash=2, dhgroup=3, seq=3775755249, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.500 [2024-09-27 13:27:14.960547] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.500 [2024-09-27 13:27:14.985118] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.500 [2024-09-27 13:27:14.985435] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.500 [2024-09-27 13:27:14.985746] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.500 [2024-09-27 13:27:14.985937] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.500 [2024-09-27 13:27:15.037619] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.500 [2024-09-27 13:27:15.037851] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.500 [2024-09-27 13:27:15.038005] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 3 (ffdhe4096) 00:22:29.500 [2024-09-27 13:27:15.038321] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.500 [2024-09-27 13:27:15.038542] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.500 ctrlr pubkey: 00:22:29.500 00000000 97 ee e8 26 9b d3 34 3d 06 d7 a7 fd 70 e8 e4 7f ...&..4=....p... 00:22:29.500 00000010 07 90 b7 54 b4 09 d3 b5 9b a5 22 af 00 27 9d 10 ...T......"..'.. 00:22:29.500 00000020 d0 ad 42 0d db 64 ad de 02 0f d5 8c 95 b4 1f 33 ..B..d.........3 00:22:29.500 00000030 81 25 30 a6 ad 63 8b 5d f9 25 5c 89 87 c6 22 9d .%0..c.].%\...". 00:22:29.500 00000040 d7 c3 f9 34 56 81 39 e8 72 c7 4e d3 2a bd 02 47 ...4V.9.r.N.*..G 00:22:29.500 00000050 8b 0b 02 da 23 7f e4 49 67 a9 90 6a e5 8a 87 63 ....#..Ig..j...c 00:22:29.500 00000060 cd 8a 71 08 14 18 4f 21 7b 24 b5 c2 5b a2 f2 6c ..q...O!{$..[..l 00:22:29.500 00000070 0d aa e5 b0 6d ea 14 b6 65 e1 3d d1 51 7a ea 74 ....m...e.=.Qz.t 00:22:29.500 00000080 a3 88 bf 95 52 b7 69 77 33 a6 d2 12 3b 20 aa 63 ....R.iw3...; .c 00:22:29.500 00000090 0b 53 86 c1 97 56 2e 27 53 ba 48 45 cf 6b b5 6e .S...V.'S.HE.k.n 00:22:29.500 000000a0 69 5e e4 77 e4 f5 12 a8 8a c1 82 2c 98 43 ec c3 i^.w.......,.C.. 00:22:29.500 000000b0 d5 3e a1 66 30 0e e8 c7 36 3a df c1 9c d2 55 25 .>.f0...6:....U% 00:22:29.500 000000c0 3a 2a df 6b c0 eb fa 12 2c 0e 82 f8 1f 3a 8f d4 :*.k....,....:.. 00:22:29.501 000000d0 91 57 7d 04 c1 ab e4 e5 3d 08 6b 60 d3 af 96 7f .W}.....=.k`.... 
00:22:29.501 000000e0 4f 7a ae 6b 7e 90 bf 75 82 dc 57 52 84 18 14 58 Oz.k~..u..WR...X 00:22:29.501 000000f0 dc 2d 4b f7 cc 29 74 31 d2 52 52 17 07 f4 1a ca .-K..)t1.RR..... 00:22:29.501 00000100 62 61 23 79 40 9c d0 99 25 1e c3 e4 e0 f9 4c ac ba#y@...%.....L. 00:22:29.501 00000110 70 97 a1 42 aa 17 66 31 ba 3d 37 6b a5 2d 21 17 p..B..f1.=7k.-!. 00:22:29.501 00000120 f6 a7 12 23 e8 c4 fd ff f5 18 09 3b 94 db 6b 43 ...#.......;..kC 00:22:29.501 00000130 70 bf 94 74 ae ef f6 f5 93 60 e5 26 bb b8 35 a5 p..t.....`.&..5. 00:22:29.501 00000140 44 40 96 b8 5b ab b9 cf bf d7 40 a1 3c 5e 4d 82 D@..[.....@.<^M. 00:22:29.501 00000150 7f 5f bb e6 25 73 2f 89 bd af aa 66 4f f5 a4 1a ._..%s/....fO... 00:22:29.501 00000160 4d 8f c3 5f 4d 53 0c a0 e7 6f c4 5d b6 ae 6b 71 M.._MS...o.]..kq 00:22:29.501 00000170 4a 08 dd 99 75 87 4b d6 dd 03 a3 21 89 08 ce 99 J...u.K....!.... 00:22:29.501 00000180 97 d8 86 34 4e e3 c6 f1 a7 1b a0 21 bf 73 bc a0 ...4N......!.s.. 00:22:29.501 00000190 2a a9 fb 94 9a d0 ae 3c 27 0a ac 6f 43 7a 07 3b *......<'..oCz.; 00:22:29.501 000001a0 dd d4 11 ba 87 f0 86 7b fa c1 eb 26 07 bf 9e f0 .......{...&.... 00:22:29.501 000001b0 fa 97 cd 86 68 3f 30 56 81 04 18 0e 5c df 60 f7 ....h?0V....\.`. 00:22:29.501 000001c0 ef e2 e3 1f 92 4d 5c a2 e2 eb a9 2a 1e c5 d1 8b .....M\....*.... 00:22:29.501 000001d0 44 b3 16 b9 7a f6 25 04 bf 49 29 85 a9 0b a2 dd D...z.%..I)..... 00:22:29.501 000001e0 ba 26 18 f1 68 d3 50 14 79 07 0a bb b2 41 d5 ad .&..h.P.y....A.. 00:22:29.501 000001f0 a8 14 99 c4 30 c0 70 91 5e 72 18 02 f5 5c 6f 38 ....0.p.^r...\o8 00:22:29.501 host pubkey: 00:22:29.501 00000000 f7 0d 65 ce d8 8d 0e db 0d b9 bb 33 63 ca 12 9a ..e........3c... 00:22:29.501 00000010 78 18 f2 cc f4 73 90 eb 77 ac 8e 23 55 59 33 58 x....s..w..#UY3X 00:22:29.501 00000020 e3 ee 30 b0 21 74 45 76 0e 1b ae 92 5c 1a 98 be ..0.!tEv....\... 00:22:29.501 00000030 d7 34 a8 5b 82 1b 8c 9d 2f 77 06 97 00 3e 64 28 .4.[..../w...>d( 00:22:29.501 00000040 6a cc 56 a2 11 d2 1b a6 df cc 20 d9 de 2e 25 6b j.V....... ...%k 00:22:29.501 00000050 3d c3 5b 40 90 85 d9 19 ea 19 73 a6 aa 3a 60 fe =.[@......s..:`. 00:22:29.501 00000060 ac 27 d7 86 f9 20 3b fd f4 0c 62 e6 50 83 4f f9 .'... ;...b.P.O. 00:22:29.501 00000070 24 0c 66 2e d8 45 45 f1 67 4c 11 a6 d0 ca fd b1 $.f..EE.gL...... 00:22:29.501 00000080 08 0c 2e 6a fb 30 93 61 3f aa 09 b0 1e 49 aa c0 ...j.0.a?....I.. 00:22:29.501 00000090 2b 6c 00 4c ce 74 cc 07 a5 12 b0 30 8a eb 9a eb +l.L.t.....0.... 00:22:29.501 000000a0 08 13 06 35 9c 85 67 be de 8c 08 5e 32 fc 3e 38 ...5..g....^2.>8 00:22:29.501 000000b0 9e e9 0a b9 74 f7 8f 34 2e be 5f 0f c0 da 4f 41 ....t..4.._...OA 00:22:29.501 000000c0 ae d7 68 b1 e0 b0 aa 2c f5 70 67 22 dd 36 ad 79 ..h....,.pg".6.y 00:22:29.501 000000d0 0d 9e 51 25 8d 47 5d ff fa d3 76 53 f3 ff 6e d3 ..Q%.G]...vS..n. 00:22:29.501 000000e0 5a 2a 45 a8 f5 dc 62 b2 6e 9d e8 b5 ff 2d ce f6 Z*E...b.n....-.. 00:22:29.501 000000f0 ff da 6d 59 0d d5 fa bb b3 1e 75 d9 48 3d 9a 8f ..mY......u.H=.. 00:22:29.501 00000100 5a da 9c 5d 88 fe 6c 6b c9 5c 05 82 77 a7 a1 21 Z..]..lk.\..w..! 00:22:29.501 00000110 1f b9 b9 3a 30 13 ea b3 18 af c5 d4 a4 e7 cf 26 ...:0..........& 00:22:29.501 00000120 d7 cb 9a 1c e0 67 7c 3c 2b 7e 44 85 c6 f3 e6 1c .....g|<+~D..... 00:22:29.501 00000130 8f a8 16 a5 de 0d e8 b7 a1 30 1d a6 69 21 88 2e .........0..i!.. 
00:22:29.501 00000140 fa 33 7e 81 26 d7 e1 97 77 2a 7e aa 85 fa 7e 6b .3~.&...w*~...~k 00:22:29.501 00000150 68 ec 5c 58 04 32 84 86 f5 63 02 8d 33 bc 49 51 h.\X.2...c..3.IQ 00:22:29.501 00000160 8b f9 19 7d b2 26 23 f3 49 5a 99 bb 7b b7 f5 bb ...}.&#.IZ..{... 00:22:29.501 00000170 c6 2d 22 77 dd d0 a7 0c de 21 0e 70 18 74 aa 66 .-"w.....!.p.t.f 00:22:29.501 00000180 8d 6f cb 04 4c fa 2e cd 3d fa a3 ed 96 91 5b af .o..L...=.....[. 00:22:29.501 00000190 37 24 c5 71 15 e4 c5 df b6 ed b4 08 c0 b4 d5 2b 7$.q...........+ 00:22:29.501 000001a0 36 fe 77 38 9b 66 58 97 02 d0 e3 62 ba 5d 5d 6a 6.w8.fX....b.]]j 00:22:29.501 000001b0 be a4 29 75 ce 1d b3 eb c0 3e f1 bc 43 8b 78 b7 ..)u.....>..C.x. 00:22:29.501 000001c0 be c4 b6 88 2c f4 49 95 ff 35 67 45 0c 13 b7 01 ....,.I..5gE.... 00:22:29.501 000001d0 bf a3 4a 34 4c de 20 3e b8 45 8d 11 fe 1f 00 8d ..J4L. >.E...... 00:22:29.501 000001e0 49 2d 81 91 ff 68 d1 78 86 c6 70 93 24 98 00 a5 I-...h.x..p.$... 00:22:29.501 000001f0 22 82 fd 25 8e 1e 13 05 f1 21 12 e0 f1 28 5b 70 "..%.....!...([p 00:22:29.501 dh secret: 00:22:29.501 00000000 92 f8 c4 8c 7b c9 0c 39 08 12 fe dc ce 30 e7 df ....{..9.....0.. 00:22:29.501 00000010 8e dc 00 72 b8 4a 4e f3 6e 3f d1 6b 84 3c f3 f6 ...r.JN.n?.k.<.. 00:22:29.501 00000020 15 0c 6e 1f ae b9 14 e6 ef c1 54 72 bd c7 59 ea ..n.......Tr..Y. 00:22:29.501 00000030 af 7d 9b 22 84 ed 9e 46 0b 59 4c 07 ab 89 48 8f .}."...F.YL...H. 00:22:29.501 00000040 64 fa ec 86 91 03 c1 45 48 e6 a3 fa 74 b8 76 68 d......EH...t.vh 00:22:29.501 00000050 1d 91 df 5a f4 c7 81 3f e3 3a 06 2e 39 b9 85 51 ...Z...?.:..9..Q 00:22:29.501 00000060 57 90 19 9b b2 5f 1e 72 be 08 b7 69 66 90 27 1f W...._.r...if.'. 00:22:29.501 00000070 9e bf 5d 8b f1 02 35 d8 05 db 7a 9c 40 84 db 90 ..]...5...z.@... 00:22:29.501 00000080 f4 85 46 5b 54 ca 72 98 2e af ef ed e5 76 87 01 ..F[T.r......v.. 00:22:29.501 00000090 c4 01 2e 9f 77 e4 e3 a9 54 ae 73 c1 1d 9e 49 08 ....w...T.s...I. 00:22:29.501 000000a0 64 70 32 54 23 43 25 7a a5 5a 72 61 3c 18 63 45 dp2T#C%z.Zra<.cE 00:22:29.501 000000b0 a1 dd 2a e7 a1 f0 b0 b9 d5 19 10 59 b5 27 47 d3 ..*........Y.'G. 00:22:29.501 000000c0 46 d0 e7 3b 26 26 ea f5 d0 8e ff db 65 b2 79 cd F..;&&......e.y. 00:22:29.501 000000d0 a1 6c 2c 44 b3 de d5 7d 3f 13 4e e0 d5 4e 34 5a .l,D...}?.N..N4Z 00:22:29.501 000000e0 37 92 9d db 02 3d ed 9b ad 5d 59 c8 c8 82 51 27 7....=...]Y...Q' 00:22:29.501 000000f0 ad 9f c3 4b d0 a0 e2 32 f1 01 47 4a 92 00 0a 02 ...K...2..GJ.... 00:22:29.501 00000100 ca 2c 53 23 0f 49 12 65 c1 24 5a 33 5c 5e 42 82 .,S#.I.e.$Z3\^B. 00:22:29.501 00000110 ff 00 e8 0d 95 31 54 78 f8 4d 82 e1 00 26 f9 1c .....1Tx.M...&.. 00:22:29.501 00000120 19 ae 2c 69 41 b6 42 5f b5 e0 32 ae f5 3f 08 47 ..,iA.B_..2..?.G 00:22:29.501 00000130 ae 85 69 54 3f 9e 35 68 54 8e d9 e4 ae 9e 58 49 ..iT?.5hT.....XI 00:22:29.501 00000140 88 b4 96 ff 4c d2 1a e7 33 0e 17 d0 69 bc 45 9f ....L...3...i.E. 00:22:29.501 00000150 d8 71 c3 95 b2 a5 92 36 5c 38 95 d3 95 4c f0 a0 .q.....6\8...L.. 00:22:29.501 00000160 c9 15 b4 7a 17 a9 80 d7 03 13 bd e4 7f d5 c2 48 ...z...........H 00:22:29.501 00000170 14 bb 59 f2 0f 6a 44 4b a5 90 b2 99 91 be 9d 6d ..Y..jDK.......m 00:22:29.501 00000180 ff 06 20 27 8b a8 e0 a9 72 4b f5 62 b1 4f 24 47 .. '....rK.b.O$G 00:22:29.501 00000190 8a cf 9c 87 fd af bf 61 d2 f1 ff 55 14 9c 11 d4 .......a...U.... 00:22:29.501 000001a0 5e f4 58 3a eb ed cc dd eb ca f1 38 b5 53 af 55 ^.X:.......8.S.U 00:22:29.501 000001b0 3e 9e 64 45 8a 06 fd d3 f5 cf 65 b1 69 b8 d5 ac >.dE......e.i... 
00:22:29.501 000001c0 66 e7 75 b9 0f 5f 81 2b af 36 90 87 a5 6f 33 8e f.u.._.+.6...o3. 00:22:29.501 000001d0 11 1f c0 0c 4b 7b 75 81 d6 e9 db bc 23 e4 28 e8 ....K{u.....#.(. 00:22:29.501 000001e0 20 66 f3 52 61 fd 2c a2 23 9c 25 82 e3 c2 e8 ec f.Ra.,.#.%..... 00:22:29.501 000001f0 d9 bc a9 5e 78 63 0e 2f 80 49 b7 16 67 00 dd c5 ...^xc./.I..g... 00:22:29.501 [2024-09-27 13:27:15.064774] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key3, hash=2, dhgroup=3, seq=3775755250, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.501 [2024-09-27 13:27:15.065069] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.501 [2024-09-27 13:27:15.092784] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.501 [2024-09-27 13:27:15.093114] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.501 [2024-09-27 13:27:15.093387] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.501 [2024-09-27 13:27:15.093606] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.501 [2024-09-27 13:27:15.201860] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.501 [2024-09-27 13:27:15.202112] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 2 (sha384) 00:22:29.501 [2024-09-27 13:27:15.202239] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 3 (ffdhe4096) 00:22:29.501 [2024-09-27 13:27:15.202525] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.501 [2024-09-27 13:27:15.202791] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.501 ctrlr pubkey: 00:22:29.501 00000000 5a 97 76 70 f4 b8 74 45 d9 b4 55 6c 1a c7 a2 55 Z.vp..tE..Ul...U 00:22:29.501 00000010 3f 14 de 54 59 5b 9a 37 de de af a8 dd fa bc df ?..TY[.7........ 00:22:29.501 00000020 08 95 a6 b7 fb c0 54 ef c4 39 47 d9 be 96 c4 a7 ......T..9G..... 00:22:29.501 00000030 8a 4b cb 2f 70 3d df 02 79 dc cd e1 e8 63 c4 58 .K./p=..y....c.X 00:22:29.501 00000040 dd 9a 93 c9 b3 c0 a4 a4 c0 bd b2 ec 0d dd 35 7d ..............5} 00:22:29.501 00000050 78 94 61 2c 7c 81 e5 3a 52 ad 1f 8e fb 30 9b 08 x.a,|..:R....0.. 00:22:29.501 00000060 40 7b b1 bb 3e 0e f6 b9 0a 6d 44 2b 55 46 57 0d @{..>....mD+UFW. 00:22:29.501 00000070 6e 67 7f 9d 3e 8b 2d fd 59 ef 47 08 f8 30 7c c7 ng..>.-.Y.G..0|. 00:22:29.501 00000080 06 5e 80 8f b3 bb a6 09 18 46 ed 38 67 02 7b de .^.......F.8g.{. 00:22:29.501 00000090 9b 43 f0 31 f6 8b 37 97 f6 17 35 d4 5b ae 0e f7 .C.1..7...5.[... 00:22:29.501 000000a0 7e 59 c1 20 e1 17 0f ed 7f 3b 2f b1 35 77 d2 7b ~Y. .....;/.5w.{ 00:22:29.501 000000b0 29 66 cd 99 46 bc 3b 93 7e d4 91 31 00 3c d4 ff )f..F.;.~..1.<.. 00:22:29.501 000000c0 1a f6 b7 80 cc 73 20 a9 53 9f 0e 8e aa db b3 e4 .....s .S....... 
00:22:29.501 000000d0 d6 1e 4f f2 63 b4 91 a4 e3 43 19 db 07 e7 21 c2 ..O.c....C....!. 00:22:29.501 000000e0 48 f1 29 13 ee 84 47 17 23 9d 5f 34 09 3b 51 1a H.)...G.#._4.;Q. 00:22:29.501 000000f0 dd fb be 13 c1 f7 a3 5c 73 c6 ba 95 6b 7c bc f1 .......\s...k|.. 00:22:29.501 00000100 3b 10 26 97 e3 a0 bf 79 63 12 b4 c0 19 6c 74 89 ;.&....yc....lt. 00:22:29.501 00000110 2d 5d 2b 98 94 0d a7 6a f0 7e 6f a6 e5 0a 6d 0d -]+....j.~o...m. 00:22:29.501 00000120 4f f1 d9 94 a8 ff 86 69 12 d0 18 46 51 5c dd e4 O......i...FQ\.. 00:22:29.501 00000130 6e 7a 22 90 63 e5 d7 ad 51 00 51 6a 2e 1c e4 20 nz".c...Q.Qj... 00:22:29.501 00000140 e6 f1 ff 8d 98 3c 30 97 5b 83 98 96 4f 71 f3 b2 .....<0.[...Oq.. 00:22:29.501 00000150 00 88 6a 2b d5 65 b8 65 bc c2 df 62 39 06 2e f3 ..j+.e.e...b9... 00:22:29.501 00000160 34 f3 35 18 cd 16 68 ee 64 f8 3d ba db 5d 6d 01 4.5...h.d.=..]m. 00:22:29.501 00000170 99 fe 95 a1 8f 92 ad 8d 57 ff f2 26 9e a2 96 e9 ........W..&.... 00:22:29.501 00000180 36 25 8d 1c e8 cc 37 1a c3 91 0b 0c 56 b1 ce b1 6%....7.....V... 00:22:29.501 00000190 69 f4 0f 57 8e bd cb c5 40 bc d1 94 45 08 06 3d i..W....@...E..= 00:22:29.501 000001a0 b0 93 b0 26 08 1f 3c eb bf 82 49 d2 a3 af f5 79 ...&..<...I....y 00:22:29.501 000001b0 af aa 2d 96 ee ab 2b 23 ca e5 29 73 68 4b a9 bc ..-...+#..)shK.. 00:22:29.501 000001c0 92 6e 35 24 9c 76 0b 82 9a 47 66 6a 79 49 42 2f .n5$.v...GfjyIB/ 00:22:29.501 000001d0 33 44 a9 cc d7 ff 22 62 8f ab 98 3d 29 ae ac 9e 3D...."b...=)... 00:22:29.501 000001e0 12 e4 ec ca 86 00 61 ba 41 17 fa bf 2c 8d f9 d3 ......a.A...,... 00:22:29.501 000001f0 03 dc 44 05 11 98 8e 4b e3 33 69 ac 43 21 47 6d ..D....K.3i.C!Gm 00:22:29.501 host pubkey: 00:22:29.501 00000000 b4 bf f5 f5 7d ff 58 df 71 ee 17 a1 0c 4f b2 84 ....}.X.q....O.. 00:22:29.501 00000010 48 53 f8 80 35 e1 e7 97 65 85 87 cb a2 07 4b 63 HS..5...e.....Kc 00:22:29.501 00000020 d3 67 be 02 8b 74 25 1d d4 d8 05 41 9c fe a3 18 .g...t%....A.... 00:22:29.501 00000030 59 50 43 18 03 73 7b 36 bb 93 83 15 03 af 2f 2a YPC..s{6....../* 00:22:29.502 00000040 13 cf 26 f6 9a a8 06 55 48 e9 c5 aa 84 d9 39 77 ..&....UH.....9w 00:22:29.502 00000050 41 15 90 08 43 5a 7e 16 f4 63 f9 d7 67 bb 20 e4 A...CZ~..c..g. . 00:22:29.502 00000060 81 50 86 ff b2 9f 4c 8c 94 53 e8 da 69 26 09 0e .P....L..S..i&.. 00:22:29.502 00000070 19 1f bc 43 4f 68 1a 12 03 f4 1c f1 6e 08 c4 95 ...COh......n... 00:22:29.502 00000080 81 7b 57 19 70 6c c4 27 42 21 1a ef 12 bc 20 c2 .{W.pl.'B!.... . 00:22:29.502 00000090 a5 50 36 4e 05 af 6c 9c 97 88 0d ff a6 e1 48 53 .P6N..l.......HS 00:22:29.502 000000a0 28 64 78 27 64 bd d8 d2 23 e3 84 d6 da eb 1b 0f (dx'd...#....... 00:22:29.502 000000b0 05 ed ba 07 38 09 86 0b 9d 31 96 c2 5e 5d 63 cc ....8....1..^]c. 00:22:29.502 000000c0 6a 6a 62 9b 98 70 86 36 90 99 e3 24 bb 02 f3 a4 jjb..p.6...$.... 00:22:29.502 000000d0 9e b0 c6 5b 3f d2 5c 69 69 8e 97 33 5b 30 9b e3 ...[?.\ii..3[0.. 00:22:29.502 000000e0 37 25 03 06 db 9e f8 8f 3e d4 4f 6e 67 9a 28 76 7%......>.Ong.(v 00:22:29.502 000000f0 34 59 3f 4b 8a d4 a2 b1 0d dc 51 e2 dd c7 b6 74 4Y?K......Q....t 00:22:29.502 00000100 1a 75 da 7b 6e 62 5c 7c e1 fc f7 ed 96 30 9c 06 .u.{nb\|.....0.. 00:22:29.502 00000110 8a f5 19 4e 4f 32 67 cc 6d 1c ca e3 96 bc 66 08 ...NO2g.m.....f. 00:22:29.502 00000120 dd a5 62 3c 81 69 22 0d 54 2e 38 a6 ed 15 5d fd ..b<.i".T.8...]. 00:22:29.502 00000130 c2 37 1a 42 40 8b e9 4d c1 f8 86 54 4c f6 a3 91 .7.B@..M...TL... 00:22:29.502 00000140 76 e5 fc 51 91 f0 69 d4 16 d2 2f 3e c1 63 94 a2 v..Q..i.../>.c.. 
00:22:29.502 00000150 34 0d b4 51 d9 10 83 ed 93 23 01 97 f6 8d 6b 06 4..Q.....#....k. 00:22:29.502 00000160 b8 f2 80 c4 5f 1a 19 50 22 d0 c8 0b d8 02 03 8d ...._..P"....... 00:22:29.502 00000170 8a 41 12 a1 a7 91 a6 6b 36 76 31 25 ae 8a 9f f9 .A.....k6v1%.... 00:22:29.502 00000180 2a 6d 75 45 5e 46 f2 dd 9d 72 eb 72 4e 8c 5a 34 *muE^F...r.rN.Z4 00:22:29.502 00000190 7f 8c 2f 77 ec 55 4c 64 ef 86 4b 9e 82 3e 64 ac ../w.ULd..K..>d. 00:22:29.502 000001a0 bc 9c 5c ac d6 49 9a ad 47 af 90 62 e3 ae cf d2 ..\..I..G..b.... 00:22:29.502 000001b0 36 07 3a 5f 20 0c 03 66 7d b0 b0 56 f4 3b a4 6b 6.:_ ..f}..V.;.k 00:22:29.502 000001c0 a9 4b ed 6e 89 d2 6e 9c 3c 72 b9 bb 78 23 b3 fc .K.n..n.~ 00:22:29.502 000001e0 21 ab a7 60 13 5e d6 0a 3d f1 d5 ae 8e fc d4 a2 !..`.^..=....... 00:22:29.502 000001f0 14 a1 d2 2c 08 80 1e ab e3 e3 82 2c 19 be 56 17 ...,.......,..V. 00:22:29.502 dh secret: 00:22:29.502 00000000 73 7c ca 8b 6b fe 06 80 1f d4 1a 0b 57 79 5a 02 s|..k.......WyZ. 00:22:29.502 00000010 63 f5 e7 ba 0b 97 91 d2 ce 05 6a 59 b4 f1 f3 1d c.........jY.... 00:22:29.502 00000020 57 8c 91 2f 2f 75 80 55 a0 6e 42 c0 4b b8 6a cf W..//u.U.nB.K.j. 00:22:29.502 00000030 dc 41 6f 5b aa 8f b7 4d 06 c3 2a f2 00 2d 60 46 .Ao[...M..*..-`F 00:22:29.502 00000040 b3 f1 69 c7 f8 e0 57 b5 94 59 fc 70 1c 8f 14 2e ..i...W..Y.p.... 00:22:29.502 00000050 c5 6a 25 fd a4 46 3c 47 c8 9e 01 9e 25 ba 75 99 .j%..F4.hnV....}X.. 00:22:29.502 00000140 d9 ac c5 7d 37 4c 08 fe 27 1c 5b fe 94 b2 c5 bb ...}7L..'.[..... 00:22:29.502 00000150 bf 71 ca 1f fc fa e1 8d 08 96 44 c7 3c 3f fa 93 .q........D.....mD+UFW. 00:22:29.502 00000070 6e 67 7f 9d 3e 8b 2d fd 59 ef 47 08 f8 30 7c c7 ng..>.-.Y.G..0|. 00:22:29.502 00000080 06 5e 80 8f b3 bb a6 09 18 46 ed 38 67 02 7b de .^.......F.8g.{. 00:22:29.502 00000090 9b 43 f0 31 f6 8b 37 97 f6 17 35 d4 5b ae 0e f7 .C.1..7...5.[... 00:22:29.502 000000a0 7e 59 c1 20 e1 17 0f ed 7f 3b 2f b1 35 77 d2 7b ~Y. .....;/.5w.{ 00:22:29.502 000000b0 29 66 cd 99 46 bc 3b 93 7e d4 91 31 00 3c d4 ff )f..F.;.~..1.<.. 00:22:29.502 000000c0 1a f6 b7 80 cc 73 20 a9 53 9f 0e 8e aa db b3 e4 .....s .S....... 00:22:29.502 000000d0 d6 1e 4f f2 63 b4 91 a4 e3 43 19 db 07 e7 21 c2 ..O.c....C....!. 00:22:29.502 000000e0 48 f1 29 13 ee 84 47 17 23 9d 5f 34 09 3b 51 1a H.)...G.#._4.;Q. 00:22:29.502 000000f0 dd fb be 13 c1 f7 a3 5c 73 c6 ba 95 6b 7c bc f1 .......\s...k|.. 00:22:29.502 00000100 3b 10 26 97 e3 a0 bf 79 63 12 b4 c0 19 6c 74 89 ;.&....yc....lt. 00:22:29.502 00000110 2d 5d 2b 98 94 0d a7 6a f0 7e 6f a6 e5 0a 6d 0d -]+....j.~o...m. 00:22:29.502 00000120 4f f1 d9 94 a8 ff 86 69 12 d0 18 46 51 5c dd e4 O......i...FQ\.. 00:22:29.502 00000130 6e 7a 22 90 63 e5 d7 ad 51 00 51 6a 2e 1c e4 20 nz".c...Q.Qj... 00:22:29.502 00000140 e6 f1 ff 8d 98 3c 30 97 5b 83 98 96 4f 71 f3 b2 .....<0.[...Oq.. 00:22:29.502 00000150 00 88 6a 2b d5 65 b8 65 bc c2 df 62 39 06 2e f3 ..j+.e.e...b9... 00:22:29.502 00000160 34 f3 35 18 cd 16 68 ee 64 f8 3d ba db 5d 6d 01 4.5...h.d.=..]m. 00:22:29.502 00000170 99 fe 95 a1 8f 92 ad 8d 57 ff f2 26 9e a2 96 e9 ........W..&.... 00:22:29.502 00000180 36 25 8d 1c e8 cc 37 1a c3 91 0b 0c 56 b1 ce b1 6%....7.....V... 00:22:29.502 00000190 69 f4 0f 57 8e bd cb c5 40 bc d1 94 45 08 06 3d i..W....@...E..= 00:22:29.502 000001a0 b0 93 b0 26 08 1f 3c eb bf 82 49 d2 a3 af f5 79 ...&..<...I....y 00:22:29.502 000001b0 af aa 2d 96 ee ab 2b 23 ca e5 29 73 68 4b a9 bc ..-...+#..)shK.. 
00:22:29.502 000001c0 92 6e 35 24 9c 76 0b 82 9a 47 66 6a 79 49 42 2f .n5$.v...GfjyIB/ 00:22:29.502 000001d0 33 44 a9 cc d7 ff 22 62 8f ab 98 3d 29 ae ac 9e 3D...."b...=)... 00:22:29.502 000001e0 12 e4 ec ca 86 00 61 ba 41 17 fa bf 2c 8d f9 d3 ......a.A...,... 00:22:29.502 000001f0 03 dc 44 05 11 98 8e 4b e3 33 69 ac 43 21 47 6d ..D....K.3i.C!Gm 00:22:29.502 host pubkey: 00:22:29.502 00000000 1a 4e 79 91 5c ee 8b a2 7a 92 9d ee b1 97 71 a8 .Ny.\...z.....q. 00:22:29.502 00000010 38 35 f3 4f 1f e4 1a 29 a7 c4 9f 9f fd 20 21 89 85.O...)..... !. 00:22:29.502 00000020 82 13 eb ea 22 dc 22 03 1d 6a c8 d9 18 c1 79 be ...."."..j....y. 00:22:29.502 00000030 21 11 a4 96 67 31 f2 74 f7 fe db dd 6e e9 36 2c !...g1.t....n.6, 00:22:29.502 00000040 d9 6d 67 ba b7 7e e1 4c a5 e9 f7 8f 28 d8 3c a3 .mg..~.L....(.<. 00:22:29.502 00000050 1d b3 01 99 dd f3 57 e2 59 c1 ef dd 6e b3 55 74 ......W.Y...n.Ut 00:22:29.502 00000060 c5 58 aa 0d b4 0d 86 c6 e7 32 5a dd a4 fa f1 66 .X.......2Z....f 00:22:29.502 00000070 88 36 99 4c 88 c2 ad 1b 7c 26 7b f4 67 dc fe 09 .6.L....|&{.g... 00:22:29.502 00000080 e5 00 bf 23 33 bc a5 57 ff 18 df 55 6b 75 ac 30 ...#3..W...Uku.0 00:22:29.502 00000090 3c b9 d8 2a f1 cb 9b d0 ae 9d eb c6 e4 85 31 cc <..*..........1. 00:22:29.502 000000a0 ad e4 76 31 d9 a1 42 ca 48 d3 d2 f2 21 14 8d 58 ..v1..B.H...!..X 00:22:29.502 000000b0 82 c9 b7 6d 2e 52 e5 75 89 9e cb e5 a1 fe 32 62 ...m.R.u......2b 00:22:29.502 000000c0 3b a4 a5 7f 50 26 99 82 e1 48 07 1b 95 73 5e ea ;...P&...H...s^. 00:22:29.502 000000d0 99 a3 65 73 ab 94 e0 8c 72 0b d0 4a ed 39 9c 81 ..es....r..J.9.. 00:22:29.502 000000e0 96 12 15 b1 94 b1 9e 67 66 e0 9f 9d f2 d4 60 44 .......gf.....`D 00:22:29.503 000000f0 f7 38 a9 da c3 45 dd 9a 7e 33 02 95 ca 60 3a d1 .8...E..~3...`:. 00:22:29.503 00000100 ce 09 59 4b c5 56 83 5c 2e 7f 46 1a 9f 52 40 8f ..YK.V.\..F..R@. 00:22:29.503 00000110 b9 8b 3c 8c 35 9c eb ae 1a c7 72 1e b5 78 cf 47 ..<.5.....r..x.G 00:22:29.503 00000120 19 2d 66 8f 46 b0 7d 27 8a fd 96 43 5c 1f c8 c1 .-f.F.}'...C\... 00:22:29.503 00000130 b8 91 07 f6 f3 07 36 e0 e1 03 e2 af 38 91 35 46 ......6.....8.5F 00:22:29.503 00000140 b8 36 cd 33 2c be bd 29 c4 33 ef b5 f1 67 a2 ed .6.3,..).3...g.. 00:22:29.503 00000150 b1 b2 75 b8 a3 ef e5 a2 b2 d2 f1 5e e1 6a d8 9b ..u........^.j.. 00:22:29.503 00000160 19 c5 98 c6 4f 13 8c ff aa 20 4b bb 06 12 25 e5 ....O.... K...%. 00:22:29.503 00000170 25 f3 eb 8f 32 c8 6c 38 3e 20 b2 33 1a 4c aa 7b %...2.l8> .3.L.{ 00:22:29.503 00000180 2b df de 4f b6 b9 05 44 16 16 b6 68 c3 79 fa fc +..O...D...h.y.. 00:22:29.503 00000190 28 f2 24 9d 8d 2c 6e ff cf 80 d2 d4 ea 77 42 0b (.$..,n......wB. 00:22:29.503 000001a0 7b cd 0b c1 4f 40 5d c8 cd cf eb cb 74 4d 7a 6f {...O@].....tMzo 00:22:29.503 000001b0 56 02 92 6f 2d 2f 5b dd 2f 32 83 a8 73 e8 50 91 V..o-/[./2..s.P. 00:22:29.503 000001c0 6e c3 2a be c3 a3 eb 5e b7 9f d6 c9 40 1d 5b d2 n.*....^....@.[. 00:22:29.503 000001d0 d0 28 0f 59 a9 e8 d9 aa 46 27 87 c1 29 31 20 35 .(.Y....F'..)1 5 00:22:29.503 000001e0 00 ab 9d 8d a2 a4 b1 87 40 95 32 f0 2f bb 01 b8 ........@.2./... 00:22:29.503 000001f0 4c 17 4b 94 53 a0 1d 9d 71 e1 60 6f 47 06 3a cd L.K.S...q.`oG.:. 00:22:29.503 dh secret: 00:22:29.503 00000000 84 bb a2 4d 7a 7b 60 d1 60 42 eb e1 c5 c3 7d 0d ...Mz{`.`B....}. 00:22:29.503 00000010 f7 94 00 10 62 c7 ac ed fd ed 06 c7 ed a9 cf f5 ....b........... 00:22:29.503 00000020 b2 66 6c 3f 27 2b 85 58 a9 53 6a 1d c8 d0 04 f5 .fl?'+.X.Sj..... 
00:22:29.503 00000030 9e e3 af 30 3b 6f 68 bb 80 25 1d 46 35 ec 05 67 ...0;oh..%.F5..g 00:22:29.503 00000040 43 fd 53 c0 cc d5 32 c3 9e 11 95 26 17 64 34 e7 C.S...2....&.d4. 00:22:29.503 00000050 85 2f b3 36 bb 40 86 4d 3f 87 0f 96 f6 85 5e 08 ./.6.@.M?.....^. 00:22:29.503 00000060 ee 10 dc 0a ad 23 0d a4 c9 96 1f 5e 00 01 53 d4 .....#.....^..S. 00:22:29.503 00000070 b8 57 db 87 36 cd e0 89 50 c7 7d 0b 9f 4d 1d 9c .W..6...P.}..M.. 00:22:29.503 00000080 f9 98 3a 65 d4 fc c4 62 2f 31 d9 04 f5 32 fd d8 ..:e...b/1...2.. 00:22:29.503 00000090 e1 b5 8e dc f0 3d 66 9c 60 b9 6b fa 85 52 6c 10 .....=f.`.k..Rl. 00:22:29.503 000000a0 1a 66 d3 d2 22 83 78 57 1e 87 b1 55 f4 10 cd ee .f..".xW...U.... 00:22:29.503 000000b0 d6 5f dd 66 b6 5e 75 69 d6 9c cd 36 f4 86 d2 57 ._.f.^ui...6...W 00:22:29.503 000000c0 51 9e 01 63 f4 09 4c 2f 6c 0a 69 e9 fc 87 ae 25 Q..c..L/l.i....% 00:22:29.503 000000d0 cb 7a ee 56 34 da 85 08 c4 2e 04 bc ce 2e a6 3c .z.V4..........< 00:22:29.503 000000e0 c1 61 e1 85 89 39 33 bf 5c 68 34 f8 cc 70 f2 a6 .a...93.\h4..p.. 00:22:29.503 000000f0 a2 be e9 32 67 8b 1c ef ce e8 18 89 96 3c d2 92 ...2g........<.. 00:22:29.503 00000100 3f d2 18 bd 85 50 44 e8 1e 43 9a 10 d4 58 9d 03 ?....PD..C...X.. 00:22:29.503 00000110 8d d0 fe fd 8e 75 eb 9d 28 97 5a 8b 90 02 42 9a .....u..(.Z...B. 00:22:29.503 00000120 c4 8d c9 24 eb f4 b8 4e f1 df d6 bb 7f c9 dc b4 ...$...N........ 00:22:29.503 00000130 ba 9b 12 ef 77 31 97 79 b7 30 3c 70 49 21 57 44 ....w1.y.0.x. 00:22:29.503 00000290 83 be b6 3c 5a ff df 23 ec b7 bf ae ab 29 f0 40 .... 00:22:29.503 00000080 1a 4e 6a d2 ee 79 f9 01 c4 be 4e 2b b4 ef f5 97 .Nj..y....N+.... 00:22:29.503 00000090 ad d0 80 79 a2 c1 36 35 4b 6b 3e 23 08 e5 39 f8 ...y..65Kk>#..9. 00:22:29.503 000000a0 92 72 1a e0 19 b6 de 0b 4b 67 e8 dd b2 8c 3a 37 .r......Kg....:7 00:22:29.503 000000b0 ba 9a de 07 67 da b4 c9 e1 43 bb fd 6d d7 06 4b ....g....C..m..K 00:22:29.503 000000c0 19 d6 59 c7 c2 5b bc b1 4d 7c f4 4d 4b fe a0 65 ..Y..[..M|.MK..e 00:22:29.503 000000d0 00 bf fe d2 77 28 db c8 d8 78 f0 e5 cc 2e a6 69 ....w(...x.....i 00:22:29.503 000000e0 80 d2 4a e4 de bd 3b 8e 6d db ff 92 bd 73 83 a7 ..J...;.m....s.. 00:22:29.503 000000f0 5c 44 2f 62 4f b6 19 81 a4 22 3e 39 b1 00 12 da \D/bO....">9.... 00:22:29.503 00000100 53 31 dd b6 18 2c 73 3d 1d 13 c2 75 de 62 be 9f S1...,s=...u.b.. 00:22:29.503 00000110 de 02 e3 e9 d2 da e3 b9 5f 0a a2 41 48 47 8a 7d ........_..AHG.} 00:22:29.503 00000120 d6 75 3d bf f0 99 a5 c7 ec 8e 5c 7e 9a 13 e7 86 .u=.......\~.... 00:22:29.503 00000130 9f 31 4d 0f 5a 91 3c b5 71 c9 93 f2 60 57 e0 39 .1M.Z.<.q...`W.9 00:22:29.503 00000140 9f c6 54 d4 2b bc 41 f1 b9 e2 ee 63 ee 84 6d 53 ..T.+.A....c..mS 00:22:29.503 00000150 90 f5 c7 23 fa fc 4b 4d 96 17 fb d5 bb 4c 91 bc ...#..KM.....L.. 00:22:29.503 00000160 e9 7e eb 22 e4 fc e7 da 12 8d e3 88 b2 da 18 c6 .~."............ 00:22:29.504 00000170 9c 0a 12 9e 62 8d 94 01 a9 34 38 5d 57 c8 c3 a1 ....b....48]W... 00:22:29.504 00000180 2f b3 9e ec 2c 36 f2 d9 1a 02 04 3c 1b db 23 89 /...,6.....<..#. 00:22:29.504 00000190 fb 0d fa 68 a0 08 75 86 9f 5a 20 c5 36 13 59 c7 ...h..u..Z .6.Y. 00:22:29.504 000001a0 c5 73 90 19 34 e3 07 e9 f3 ce c0 b4 a3 ab f0 24 .s..4..........$ 00:22:29.504 000001b0 50 dc 3e 91 25 b9 9c 3b a1 c2 22 e4 90 67 2a 61 P.>.%..;.."..g*a 00:22:29.504 000001c0 64 e2 81 a9 c3 e4 88 66 fc 3d 1f d7 b2 98 de e5 d......f.=...... 00:22:29.504 000001d0 d2 65 7d 69 9b de 3a 9f 52 dd ae a7 6e e5 d3 82 .e}i..:.R...n... 00:22:29.504 000001e0 bd 5b aa 05 5b 37 82 b6 bd fc ef 86 70 a5 ef 16 .[..[7......p... 
00:22:29.504 000001f0 14 4c 48 a8 6a 38 e0 dd e6 01 1b 1c c3 90 6f 04 .LH.j8........o. 00:22:29.504 00000200 5d 42 71 3c 4d 50 0b 5a 8e 9a 9f da 7b b5 ef 56 ]Bq.}t.Q.b..... 00:22:29.504 000002e0 55 92 c1 32 88 d3 f3 82 44 f6 ad 27 04 2a 70 69 U..2....D..'.*pi 00:22:29.504 000002f0 90 5e 91 72 84 2d 4d 57 f6 14 c7 90 b1 1e b4 f6 .^.r.-MW........ 00:22:29.504 dh secret: 00:22:29.504 00000000 84 eb ac 27 e2 95 36 00 d2 db 36 cd af ee d6 97 ...'..6...6..... 00:22:29.504 00000010 7f f8 77 eb e2 34 b2 bc c3 62 79 b5 16 50 70 0f ..w..4...by..Pp. 00:22:29.504 00000020 3b b0 61 92 e3 b5 da 9b a9 d7 80 1b 6a 37 ec 2e ;.a.........j7.. 00:22:29.504 00000030 e4 88 26 82 50 74 92 4b 3c 8e df d7 cc 29 bb 62 ..&.Pt.K<....).b 00:22:29.504 00000040 18 b3 89 f8 0b f0 95 ab 83 f4 97 72 d5 a3 b2 23 ...........r...# 00:22:29.504 00000050 91 f6 17 d9 fc e4 68 c9 80 19 b8 b3 0c 6f d5 d4 ......h......o.. 00:22:29.504 00000060 ea ef c3 9b 9b b2 09 0d 23 b9 b9 36 df d4 1c e2 ........#..6.... 00:22:29.504 00000070 83 3b 52 12 9e 9a f4 4b 86 ef b5 b1 ee 7f 08 af .;R....K........ 00:22:29.504 00000080 6b d6 1b f4 21 8b 62 18 f8 d6 f9 cd 84 ca a8 72 k...!.b........r 00:22:29.504 00000090 37 97 f8 10 86 f8 9d 59 b4 06 31 4d 54 54 b9 32 7......Y..1MTT.2 00:22:29.504 000000a0 7e 9e d7 28 c7 92 02 e1 d3 cb 0f 8f ad 84 21 c6 ~..(..........!. 00:22:29.504 000000b0 5a d9 1a 76 3f 80 72 8a 47 57 dc 03 ba 02 34 31 Z..v?.r.GW....41 00:22:29.504 000000c0 d0 74 86 12 40 59 50 e6 7f ef 81 1b cc e9 2d 58 .t..@YP.......-X 00:22:29.504 000000d0 ff c6 06 f8 a8 f1 59 0a 14 db a0 30 4b ae 99 e3 ......Y....0K... 00:22:29.504 000000e0 8c 65 6a 41 2d 22 8c 81 39 08 fc 0a d4 fc af 75 .ejA-"..9......u 00:22:29.504 000000f0 05 bb ad 94 6c a2 a3 72 ad 12 42 9a 6b 7a 0d 54 ....l..r..B.kz.T 00:22:29.504 00000100 68 a7 a2 a9 11 91 1b c3 0d 14 ef 4b 87 bb 13 82 h..........K.... 00:22:29.504 00000110 28 17 71 70 30 8e 3d f3 06 7e 12 fb b4 cf 1c 27 (.qp0.=..~.....' 00:22:29.504 00000120 61 09 be 3b 37 9f f0 00 55 4d 1f 84 6a fc 7e cf a..;7...UM..j.~. 00:22:29.504 00000130 dc 62 24 78 8d b3 c5 27 cd 7b 54 aa 04 ae 57 82 .b$x...'.{T...W. 00:22:29.504 00000140 43 fd e4 76 e8 a7 94 09 f0 20 39 3d 0e c6 49 20 C..v..... 9=..I 00:22:29.504 00000150 e4 20 62 7d a6 8e e5 da cf b5 2c a8 07 1b 0d 53 . b}......,....S 00:22:29.504 00000160 68 46 3b 03 b0 99 ce 65 57 96 61 ec 5e 0a 88 fd hF;....eW.a.^... 00:22:29.504 00000170 d6 2d e5 33 65 ba 2c 21 f7 25 e7 6f 7c 50 8f f0 .-.3e.,!.%.o|P.. 00:22:29.504 00000180 fc ff bf f0 12 ca 4b a6 fa 6e 58 9b b9 f0 b6 9e ......K..nX..... 00:22:29.504 00000190 c9 b3 e2 d4 d1 99 fd 7c 82 84 85 96 7f 06 5e 3d .......|......^= 00:22:29.504 000001a0 6d 52 b4 4a e7 71 0e 61 18 d9 a9 ae 66 66 30 f9 mR.J.q.a....ff0. 00:22:29.504 000001b0 0e 5b f3 ed 98 48 15 6b 77 41 9f 51 cb a4 bc 3c .[...H.kwA.Q...< 00:22:29.504 000001c0 f7 54 3c a8 ce 69 ef 8b 4c 6b 81 d6 e2 06 26 c4 .T<..i..Lk....&. 00:22:29.504 000001d0 19 55 c6 25 58 55 b0 11 06 6a 84 2d bc 57 98 6a .U.%XU...j.-.W.j 00:22:29.504 000001e0 f4 c0 b6 45 e1 09 1c 65 f7 90 da f8 91 db e3 2a ...E...e.......* 00:22:29.504 000001f0 70 f7 d9 be 74 fc 6c 1f 89 91 d6 8c a4 26 17 57 p...t.l......&.W 00:22:29.504 00000200 92 63 1f 42 c1 52 67 96 6a eb c4 cc 6f d7 26 51 .c.B.Rg.j...o.&Q 00:22:29.504 00000210 f4 59 78 27 a9 fd 88 ed 66 e3 11 ae 86 d3 4b df .Yx'....f.....K. 00:22:29.504 00000220 42 2a 30 b6 c6 a0 2f c8 25 2c c4 ab 0e 61 97 1f B*0.../.%,...a.. 00:22:29.504 00000230 08 84 a1 ec e3 3f 70 42 f3 ec 14 01 7c 72 38 e0 .....?pB....|r8. 
00:22:29.504 00000240 83 e3 22 0e 8c dc 05 9c b9 83 3c b5 e7 07 ca 3a ..".......<....: 00:22:29.504 00000250 b5 48 df 3b b7 0d b3 51 b5 25 a9 bf 5d 2f 50 4e .H.;...Q.%..]/PN 00:22:29.504 00000260 74 03 0d 05 aa ca fe cc ff 79 45 df bd 12 39 d2 t........yE...9. 00:22:29.504 00000270 db f0 b3 af c6 47 f9 63 07 d6 4f 23 ef d0 85 bc .....G.c..O#.... 00:22:29.504 00000280 c9 79 34 4c a0 46 1f e7 0b 15 52 2b 5a 3c 78 e9 .y4L.F....R+Z...>V.../` 00:22:29.504 [2024-09-27 13:27:15.576292] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key0, hash=2, dhgroup=4, seq=3775755253, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.504 [2024-09-27 13:27:15.576566] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.504 [2024-09-27 13:27:15.626580] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.504 [2024-09-27 13:27:15.626989] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.504 [2024-09-27 13:27:15.627204] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.504 [2024-09-27 13:27:15.627425] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.504 [2024-09-27 13:27:15.679365] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.504 [2024-09-27 13:27:15.679596] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.504 [2024-09-27 13:27:15.679789] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 4 (ffdhe6144) 00:22:29.504 [2024-09-27 13:27:15.679972] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.504 [2024-09-27 13:27:15.680202] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.504 ctrlr pubkey: 00:22:29.504 00000000 f1 49 f8 a2 02 72 f6 cc d8 e0 fa 95 6a f5 c6 13 .I...r......j... 00:22:29.504 00000010 6b 11 0b 83 da a4 10 35 c5 f4 72 a0 8a 34 6f 93 k......5..r..4o. 00:22:29.504 00000020 e0 6f 86 ff df 1a 15 dd f8 fc 5e 06 33 37 28 a9 .o........^.37(. 00:22:29.504 00000030 94 2f 88 6a ba 3d 11 70 59 5a 55 5c c9 54 29 d5 ./.j.=.pYZU\.T). 00:22:29.504 00000040 62 77 39 93 fa 2d 0a aa 42 88 f2 b9 19 44 27 d7 bw9..-..B....D'. 00:22:29.504 00000050 5e 59 24 45 f0 93 cb d8 59 1f 25 7f 94 13 8c db ^Y$E....Y.%..... 00:22:29.504 00000060 0d fb 03 cd ec db 7f f9 c3 95 7d e7 66 1b 22 35 ..........}.f."5 00:22:29.504 00000070 f0 4f 16 76 7e 51 6c 28 63 c3 4c ad 0e 47 03 d5 .O.v~Ql(c.L..G.. 00:22:29.504 00000080 07 72 22 7a 98 31 3c ad d1 dc 66 ac 05 17 f9 c6 .r"z.1<...f..... 00:22:29.504 00000090 28 66 d3 57 72 32 2f 86 d2 db 36 05 a4 72 8e ef (f.Wr2/...6..r.. 00:22:29.504 000000a0 0d 1b 24 83 bf 44 26 a8 68 e0 24 1a 7b b3 40 b1 ..$..D&.h.$.{.@. 
00:22:29.504 000000b0 ac ef e6 ce 1e 96 6b 1a 18 89 72 6e 0f 5b 38 49 ......k...rn.[8I 00:22:29.504 000000c0 7d b0 a0 cc 6f 59 c5 c1 a4 bd 2c 76 b4 44 fa fd }...oY....,v.D.. 00:22:29.504 000000d0 46 f9 36 cb 7d 25 67 27 3a 07 d1 cd 94 f8 c8 fb F.6.}%g':....... 00:22:29.504 000000e0 25 d4 d0 dc 5e e4 bf 76 bc 69 b8 84 fd d7 e9 c8 %...^..v.i...... 00:22:29.504 000000f0 14 89 64 a5 ce 7c 01 07 8c 7a 6a af 0e 83 d5 37 ..d..|...zj....7 00:22:29.504 00000100 e5 c1 5d dc c4 94 aa fe fa 35 5b d2 a5 3a 34 5b ..]......5[..:4[ 00:22:29.504 00000110 c2 69 9d 08 b4 c7 d7 36 dc 7a 81 42 d1 42 42 0c .i.....6.z.B.BB. 00:22:29.504 00000120 f2 e3 2f 2c df de 2e df 63 3d 1b 99 83 80 2f cb ../,....c=..../. 00:22:29.504 00000130 9c 1f 53 0a 86 0d 24 fb 16 46 1b 7f 1c b7 22 a9 ..S...$..F....". 00:22:29.504 00000140 3b b7 cf 14 e9 50 1b 7c 79 95 0e 96 c0 fd 6b f8 ;....P.|y.....k. 00:22:29.504 00000150 1d aa 5d ff 90 85 83 6c 6c 77 7e 7f 2c dd 7a 9f ..]....llw~.,.z. 00:22:29.504 00000160 1c 3b e6 cf 65 a7 42 8e bd d6 51 fe de d7 88 6a .;..e.B...Q....j 00:22:29.504 00000170 6d 0a ae 69 9f 83 2c 5c f5 93 e3 3a ab 79 f4 cf m..i..,\...:.y.. 00:22:29.504 00000180 23 30 68 f2 c1 a2 0c 86 63 a7 32 8c e7 19 59 c6 #0h.....c.2...Y. 00:22:29.504 00000190 5e 4a eb db 4e 91 be 5d f0 a5 25 21 db 21 c3 03 ^J..N..]..%!.!.. 00:22:29.504 000001a0 8e 75 2d da 5c ff ea 45 7e 2b fe fc b0 59 b2 e5 .u-.\..E~+...Y.. 00:22:29.504 000001b0 4f 18 af 6f 82 8c 97 48 a9 63 bf e6 81 3c 9f d6 O..o...H.c...<.. 00:22:29.504 000001c0 ea 9f 4d be 1b 3f e0 28 99 d0 b9 07 92 a7 24 6c ..M..?.(......$l 00:22:29.504 000001d0 ae a5 a6 75 06 91 9e 0f a9 8c a5 49 1e 4f 6e 1c ...u.......I.On. 00:22:29.504 000001e0 35 5f 37 91 82 1a 63 70 52 7d ce 93 2b e1 d5 b1 5_7...cpR}..+... 00:22:29.504 000001f0 02 64 78 b4 e0 f1 8b 42 b4 4c c8 f6 be 27 ac 61 .dx....B.L...'.a 00:22:29.504 00000200 80 82 7f 5b 83 f8 40 83 0e 2f ba 54 0e 63 b7 ba ...[..@../.T.c.. 00:22:29.504 00000210 ce ca c7 98 cf d8 51 c6 84 52 50 b3 3c 48 fd f5 ......Q..RP..x. 00:22:29.504 00000290 83 be b6 3c 5a ff df 23 ec b7 bf ae ab 29 f0 40 ...ww}........t. 00:22:29.505 00000240 da 95 1d 51 34 97 14 92 59 78 75 4d a1 d3 d0 b9 ...Q4...YxuM.... 00:22:29.505 00000250 7a 8a e9 d6 fb de 78 49 a8 10 00 e5 81 47 5a 9b z.....xI.....GZ. 00:22:29.505 00000260 7b fa d0 e3 1c a5 02 b6 8a b9 97 83 3d a4 38 9c {...........=.8. 00:22:29.505 00000270 8c 48 dd 0c 47 0b 2e a2 75 1d f6 93 27 0c 27 a3 .H..G...u...'.'. 00:22:29.505 00000280 9c eb 48 03 63 98 2c 60 de e5 10 15 70 ec 32 12 ..H.c.,`....p.2. 00:22:29.505 00000290 0b 04 68 1a a1 46 72 1a d2 01 80 b7 11 23 90 71 ..h..Fr......#.q 00:22:29.505 000002a0 55 31 67 e6 67 09 7b e9 45 ed 00 74 fa e6 7d c5 U1g.g.{.E..t..}. 00:22:29.505 000002b0 09 ef d9 cd 0c 63 6c 7f c6 ac af 25 07 2e aa 15 .....cl....%.... 00:22:29.505 000002c0 42 40 8d 53 87 0f 48 f4 ce 66 2f 99 01 ad 4a fa B@.S..H..f/...J. 00:22:29.505 000002d0 f8 8c 56 cc 6a 2f 68 de 71 07 f8 ab 75 49 0a 98 ..V.j/h.q...uI.. 00:22:29.505 000002e0 c2 83 96 2f ed 27 1b 02 3e ce 0e c0 78 69 01 d0 .../.'..>...xi.. 00:22:29.505 000002f0 b6 29 b4 93 33 a4 3c 27 93 54 07 06 eb eb a7 37 .)..3.<'.T.....7 00:22:29.505 dh secret: 00:22:29.505 00000000 b3 60 e0 7c e1 2e bb 0f a5 5f 85 94 74 a4 bb dd .`.|....._..t... 00:22:29.505 00000010 6c fc 89 7b de c4 2e f8 e1 b7 11 33 f3 97 aa bf l..{.......3.... 
00:22:29.505 00000020 4e 61 ca 6b c2 82 dd 2b dd 02 3d 38 b6 0a 4b 61 Na.k...+..=8..Ka 00:22:29.505 00000030 da 66 81 63 80 f0 df 50 87 d3 fc 97 63 47 6d 71 .f.c...P....cGmq 00:22:29.505 00000040 c7 38 8f 7d c1 1b 52 f5 b7 35 a7 9c 07 e2 c9 e5 .8.}..R..5...... 00:22:29.505 00000050 44 7d 7d 9d 1e b5 ad 2b 8b 1a c8 94 28 7a 5a cc D}}....+....(zZ. 00:22:29.505 00000060 fc 8c bf 29 6b 8f 61 26 ee ee 48 ee f6 61 a7 20 ...)k.a&..H..a. 00:22:29.505 00000070 dd 22 d6 16 0e 14 08 15 16 95 55 cc b3 d0 02 66 ."........U....f 00:22:29.505 00000080 25 d0 c9 9d 28 58 ac 0c b4 2e 13 fc d3 d7 27 aa %...(X........'. 00:22:29.505 00000090 b2 b6 5f 63 b7 13 b2 be a5 58 55 33 da 80 c7 f3 .._c.....XU3.... 00:22:29.505 000000a0 f1 c1 b9 21 3b 72 b1 66 9b d8 28 d7 8d ac d7 b2 ...!;r.f..(..... 00:22:29.505 000000b0 70 ce 70 16 19 92 b7 ab 40 13 11 78 78 25 0e a7 p.p.....@..xx%.. 00:22:29.505 000000c0 b6 e7 60 b2 2f 29 37 ab ae 42 a0 6a 6f 09 70 41 ..`./)7..B.jo.pA 00:22:29.505 000000d0 56 15 13 47 36 68 66 b8 4f 6f 57 80 99 91 f2 a5 V..G6hf.OoW..... 00:22:29.505 000000e0 6c 71 cc c2 66 05 32 22 8a eb 92 26 29 48 47 06 lq..f.2"...&)HG. 00:22:29.505 000000f0 0a 43 f3 98 7a 13 8b 1b 7b ad 12 46 09 55 12 a7 .C..z...{..F.U.. 00:22:29.505 00000100 0f a0 2c 3e 24 6e 4a 06 53 90 d7 1e d3 3e 4f b2 ..,>$nJ.S....>O. 00:22:29.505 00000110 bd 93 18 a4 49 e6 88 86 05 f4 08 ba 91 e2 27 fc ....I.........'. 00:22:29.505 00000120 a3 97 7b 25 c9 16 37 da f1 df 0f e6 20 e7 cb e8 ..{%..7..... ... 00:22:29.505 00000130 eb ae a2 c3 a2 5f e1 c4 df da 60 c0 5d d5 89 4a ....._....`.]..J 00:22:29.505 00000140 00 9a c3 8c 90 d3 35 df 4b c9 59 bd 01 c9 58 84 ......5.K.Y...X. 00:22:29.505 00000150 42 8c e2 a4 93 62 c9 af b0 70 22 70 a2 a6 18 67 B....b...p"p...g 00:22:29.505 00000160 90 b3 11 26 51 49 6f 11 d7 1d 2d c9 78 3d 5d 3c ...&QIo...-.x=]< 00:22:29.505 00000170 4e 7e c4 34 47 eb 11 54 9a bb 5d 6a 8f c9 ad 82 N~.4G..T..]j.... 00:22:29.505 00000180 24 e4 7b 5d b1 63 f6 3d ba 28 c2 6a 0e c0 bf 6d $.{].c.=.(.j...m 00:22:29.505 00000190 c0 95 df f8 16 9d ea 4b 5b 53 1f 4b c1 23 ca 5a .......K[S.K.#.Z 00:22:29.505 000001a0 ef 7c 9f 01 26 d7 e9 5f 26 0f 9f 62 bd 77 bf 99 .|..&.._&..b.w.. 00:22:29.505 000001b0 50 26 5a 70 8a c7 f5 f3 3d 24 cd 53 f2 a2 83 61 P&Zp....=$.S...a 00:22:29.505 000001c0 e8 48 c7 7f cc 88 9e ab ec 1c 50 00 25 29 6f a8 .H........P.%)o. 00:22:29.505 000001d0 6f bd 26 53 3d cc 0c 55 16 47 1b ec 4f ab 03 92 o.&S=..U.G..O... 00:22:29.505 000001e0 17 08 4e df 56 5e 6d aa 86 d0 47 df d9 1b bd 26 ..N.V^m...G....& 00:22:29.505 000001f0 05 4d a0 56 88 a7 c2 90 bb f6 ce 6e f3 d7 9e 21 .M.V.......n...! 00:22:29.505 00000200 29 d3 f6 48 ea fc 1a b1 b8 25 85 eb d1 4a 2e 97 )..H.....%...J.. 00:22:29.505 00000210 f1 9c 9d e2 a3 82 25 9d 4a d1 53 b2 3a 00 b9 13 ......%.J.S.:... 00:22:29.505 00000220 a2 92 54 4d de 0b f7 8a 98 36 61 5f aa 62 d4 3d ..TM.....6a_.b.= 00:22:29.505 00000230 ac 42 02 e2 3d 23 94 10 88 d0 37 90 8a 50 1b d2 .B..=#....7..P.. 00:22:29.505 00000240 ed 8d 60 00 61 33 e8 ad 10 14 70 87 37 09 1a 95 ..`.a3....p.7... 00:22:29.505 00000250 d5 57 ea 98 6c 9a d5 df 74 ca d9 3a d8 df 94 9a .W..l...t..:.... 00:22:29.505 00000260 c1 0d 12 09 d3 cf 0b d9 43 1b 26 17 b3 10 4b 70 ........C.&...Kp 00:22:29.505 00000270 97 38 e9 c8 61 6e c3 e2 b9 dc b3 14 56 4f 2f af .8..an......VO/. 00:22:29.505 00000280 98 b3 40 c2 d6 95 b9 88 ae 0b d1 da d2 13 9c 0e ..@............. 00:22:29.505 00000290 f3 fe ce 00 d6 b1 af df 47 a7 7f 45 de 43 c6 ba ........G..E.C.. 
00:22:29.505 000002a0 82 cb 50 92 02 60 df 87 f6 5a a1 7e 48 6e 59 8b ..P..`...Z.~HnY. 00:22:29.505 000002b0 9f dc a4 1c 33 36 19 5f 82 b2 75 38 c8 f0 d1 33 ....36._..u8...3 00:22:29.505 000002c0 78 ca bf 20 f2 35 8d 66 41 1e 9b 4e d8 af f8 a0 x.. .5.fA..N.... 00:22:29.505 000002d0 a9 9d 03 3e 5a 1e 03 82 fd 0b ee d8 ec 15 82 6e ...>Z..........n 00:22:29.505 000002e0 0f 11 54 f0 ec e4 18 00 98 98 f2 5b 4b 09 4f c3 ..T........[K.O. 00:22:29.505 000002f0 6d df e0 12 32 23 11 01 df 2b d1 94 50 2a 46 35 m...2#...+..P*F5 00:22:29.505 [2024-09-27 13:27:15.752562] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key0, hash=2, dhgroup=4, seq=3775755254, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.505 [2024-09-27 13:27:15.752884] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.505 [2024-09-27 13:27:15.803190] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.505 [2024-09-27 13:27:15.803589] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.505 [2024-09-27 13:27:15.803811] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.505 [2024-09-27 13:27:15.804038] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.505 [2024-09-27 13:27:15.938102] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.505 [2024-09-27 13:27:15.938326] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 2 (sha384) 00:22:29.505 [2024-09-27 13:27:15.938542] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 4 (ffdhe6144) 00:22:29.505 [2024-09-27 13:27:15.938718] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.505 [2024-09-27 13:27:15.939003] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.505 ctrlr pubkey: 00:22:29.505 00000000 fa df 55 7e ff f2 31 2d ef 23 d0 aa db cf ff 03 ..U~..1-.#...... 00:22:29.505 00000010 81 b4 02 64 54 c5 23 d8 a9 16 ef 0b ed 4f c2 1b ...dT.#......O.. 00:22:29.505 00000020 be e1 61 34 88 1e e2 35 e6 77 ff 10 38 4f 8d 2a ..a4...5.w..8O.* 00:22:29.505 00000030 94 37 aa ba 01 db c5 35 77 fc ac 4e 7a f4 36 a5 .7.....5w..Nz.6. 00:22:29.505 00000040 f0 1a ae 0f 4d 22 65 ca 31 59 66 7e 1c 4b 42 30 ....M"e.1Yf~.KB0 00:22:29.505 00000050 81 1d dc 9e 9a f2 73 b6 67 0a e6 a3 b0 8e ce 2e ......s.g....... 00:22:29.505 00000060 d7 5d a8 f0 01 9a 68 be 10 b3 b5 2c 54 b7 23 d7 .]....h....,T.#. 00:22:29.505 00000070 3b bf 2f 85 7f 73 c1 7d 03 01 52 af ca 0f 15 3f ;./..s.}..R....? 00:22:29.505 00000080 dc e7 f0 db 56 01 85 0d d5 3e f9 b8 49 e4 0d 91 ....V....>..I... 00:22:29.505 00000090 3e 0d 0b 21 0a 4f 0f f4 94 e3 0f 6b c3 3a cd d6 >..!.O.....k.:.. 00:22:29.505 000000a0 f5 ce ec e6 e1 cf 1a 7f ec 2d d0 19 03 f3 21 fc .........-....!. 
00:22:29.505 000000b0 25 41 22 c9 90 c2 46 d8 9d 6a a8 3a a9 38 ab 1e %A"...F..j.:.8.. 00:22:29.505 000000c0 95 5d 28 f3 f3 0d 59 5e 32 e4 aa aa 71 fd b6 64 .](...Y^2...q..d 00:22:29.505 000000d0 37 86 48 43 68 09 7c 4c 73 cd fd 9e e2 6a 9f f3 7.HCh.|Ls....j.. 00:22:29.505 000000e0 b5 5a 11 34 af 37 ba 6b 52 cd 3e c3 aa e3 ab fd .Z.4.7.kR.>..... 00:22:29.505 000000f0 57 aa be 12 92 f4 eb 57 eb 7b 4f cb 8f 66 6b 66 W......W.{O..fkf 00:22:29.505 00000100 4d 85 a7 bd 56 2a e0 e8 db 6f dd df cc 74 7e 09 M...V*...o...t~. 00:22:29.505 00000110 d5 e5 ed c3 bc 53 a2 78 3e 07 0a cc 98 1b 05 64 .....S.x>......d 00:22:29.505 00000120 0a c2 11 ef 98 e9 96 ef c3 ce 7f 16 cf c0 ed ac ................ 00:22:29.506 00000130 0e 11 f6 e5 05 ef 77 21 35 2d 8a 39 0d 70 66 25 ......w!5-.9.pf% 00:22:29.506 00000140 ad 13 de 3e 8f 85 16 bf 39 61 28 66 be 34 8b 32 ...>....9a(f.4.2 00:22:29.506 00000150 0a 82 20 4c a4 22 eb 9b 58 1f 83 81 b5 d0 df 80 .. L."..X....... 00:22:29.506 00000160 56 b8 e6 73 05 17 5e 8f dc c6 ad f6 ef 4e 25 9c V..s..^......N%. 00:22:29.506 00000170 8f d0 98 3a c3 03 b4 7c 08 34 f1 9b 19 1e 11 e6 ...:...|.4...... 00:22:29.506 00000180 14 62 4e c9 41 97 d6 e2 74 29 5d fc 0f c4 89 ae .bN.A...t)]..... 00:22:29.506 00000190 df a1 11 32 c7 8b f4 d4 94 f0 e6 3e 40 66 47 91 ...2.......>@fG. 00:22:29.506 000001a0 4a 8b a0 30 54 fb f2 a5 c7 4e 9a 14 0c 50 7d 84 J..0T....N...P}. 00:22:29.506 000001b0 e6 db c3 b3 05 c5 93 f1 6f b7 1d 83 9a 0f 01 31 ........o......1 00:22:29.506 000001c0 47 89 0a 9b ea 29 44 42 f7 7f c2 93 b3 0c 97 d6 G....)DB........ 00:22:29.506 000001d0 96 79 60 7b fc b0 0c f9 c0 15 36 73 6e bd 8a 67 .y`{......6sn..g 00:22:29.506 000001e0 b1 d4 60 af 0f 1d 02 36 9b 50 3b 82 bf f5 ff a4 ..`....6.P;..... 00:22:29.506 000001f0 9d a6 bf 1a 62 4d 9e b1 f5 f0 8c 5a b0 cc e5 6c ....bM.....Z...l 00:22:29.506 00000200 f2 07 13 fc 8d 9b 84 79 61 be 8c 0c 2e 05 c0 1a .......ya....... 00:22:29.506 00000210 8f 8b 94 d1 e4 4e ca 37 5b 20 54 58 c3 e9 c7 95 .....N.7[ TX.... 00:22:29.506 00000220 24 ca 42 47 8a 1a c8 53 b3 63 bc 82 40 eb 51 d2 $.BG...S.c..@.Q. 00:22:29.506 00000230 35 9c 2a f7 21 04 48 07 78 89 0f 72 4c 3a 11 5b 5.*.!.H.x..rL:.[ 00:22:29.506 00000240 ed 72 c5 7f ff 73 21 55 3f 0c a4 f1 f4 28 47 50 .r...s!U?....(GP 00:22:29.506 00000250 dc 85 e4 55 f9 d9 af 09 30 73 91 f9 7e 51 b9 09 ...U....0s..~Q.. 00:22:29.506 00000260 65 9b 99 58 d5 06 e3 25 ee c0 35 44 f6 66 a2 ca e..X...%..5D.f.. 00:22:29.506 00000270 f1 8a c9 f8 77 3c e4 a2 bd 14 e4 8b c7 60 98 ae ....w<.......`.. 00:22:29.506 00000280 3a 0a da 36 c2 67 e2 56 d1 d2 96 d9 67 ac 0e 5d :..6.g.V....g..] 00:22:29.506 00000290 dd 29 1c 1c d2 68 d1 a9 b0 c3 11 68 de 04 90 be .)...h.....h.... 00:22:29.506 000002a0 df 68 2a a4 99 f3 dd 57 82 1a 94 c1 5f b5 f7 1d .h*....W...._... 00:22:29.506 000002b0 9f 16 07 f1 bc 36 59 72 50 bb cc bd e6 fa b1 5c .....6YrP......\ 00:22:29.506 000002c0 78 5d f9 76 a7 24 56 86 61 7a b1 3b 6f c6 dd 0d x].v.$V.az.;o... 00:22:29.506 000002d0 9f 4a 86 ac 3f 9a 19 ff 70 3c e7 12 cd 80 ac 74 .J..?...p<.....t 00:22:29.506 000002e0 9c e2 01 3e fd 5c 9d ef 75 1a 7a 1b 2b f3 19 8f ...>.\..u.z.+... 00:22:29.506 000002f0 f1 23 25 80 f3 70 f7 82 48 2c 9e f4 65 e0 7d c2 .#%..p..H,..e.}. 00:22:29.506 host pubkey: 00:22:29.506 00000000 8b fc a6 38 55 59 84 35 08 24 fe 35 8c da 20 d5 ...8UY.5.$.5.. . 00:22:29.506 00000010 45 12 f9 8c 6b 4a 7c 73 78 c0 a2 a9 f0 f4 58 e3 E...kJ|sx.....X. 00:22:29.506 00000020 13 aa 45 e8 26 48 d9 06 ea df e0 c9 ba 6a dd a8 ..E.&H.......j.. 
00:22:29.506 00000030 0a 5e 12 ad 08 25 cd b2 15 f9 3a f3 91 8a f8 26 .^...%....:....& 00:22:29.506 00000040 4c c2 dc 48 1d dc 4a d4 ed bf 2c 72 f5 36 b6 8b L..H..J...,r.6.. 00:22:29.506 00000050 62 79 06 38 c1 fd 30 63 56 d0 04 c4 b6 19 38 80 by.8..0cV.....8. 00:22:29.506 00000060 ff 7a 59 06 27 ea 77 77 72 ba 18 d6 b2 38 c3 8d .zY.'.wwr....8.. 00:22:29.506 00000070 78 c0 5a f4 2a c8 09 3a 7b 71 25 8e 45 84 aa 3c x.Z.*..:{q%.E..< 00:22:29.506 00000080 31 8d 4f 4a 26 67 2a a4 59 e0 ce b6 b0 f1 8e 26 1.OJ&g*.Y......& 00:22:29.506 00000090 57 01 a0 bd 94 eb 7d 06 2b 2d d9 5e e2 49 b8 3d W.....}.+-.^.I.= 00:22:29.506 000000a0 d9 8c 58 5c 95 d0 00 a8 40 fb b7 93 dc 22 ea 65 ..X\....@....".e 00:22:29.506 000000b0 f2 e6 5b 4e 8b 45 ef cc fa 31 f2 22 3b 70 9f 4c ..[N.E...1.";p.L 00:22:29.506 000000c0 97 e6 16 df 9d 82 e6 48 cb b7 0d 12 b0 b1 d8 0f .......H........ 00:22:29.506 000000d0 2e 40 ea f1 0e 9d 68 b2 a2 95 da 10 7b 6e ce c4 .@....h.....{n.. 00:22:29.506 000000e0 7f 2d 6d 51 db e1 c9 1e 4c 71 31 80 50 a4 90 a4 .-mQ....Lq1.P... 00:22:29.506 000000f0 ae 38 06 6b 04 bd b5 41 10 3b 55 c4 d8 80 14 55 .8.k...A.;U....U 00:22:29.506 00000100 d5 a4 35 c2 31 2d 0b 88 11 74 b9 86 eb f8 31 f5 ..5.1-...t....1. 00:22:29.506 00000110 76 50 09 a2 2b 6d bd ab 49 6f cb 17 f2 99 cd da vP..+m..Io...... 00:22:29.506 00000120 ca 26 06 1a bb 56 fd 38 8f d1 71 63 35 19 94 77 .&...V.8..qc5..w 00:22:29.506 00000130 18 1a d3 6b 69 7c 9e 1b e8 d7 d3 fd 08 b7 5a f9 ...ki|........Z. 00:22:29.506 00000140 b4 1b 0b 8f c7 bd f5 1f 05 74 ab bb f4 04 7e 74 .........t....~t 00:22:29.506 00000150 8d 1f e7 34 a0 3b 4b d5 24 85 af bf 15 2a cd 48 ...4.;K.$....*.H 00:22:29.506 00000160 cf 08 24 18 13 c7 df 45 67 a2 fe 05 df c1 3e b4 ..$....Eg.....>. 00:22:29.506 00000170 73 fe cc 18 8f a2 c4 fd 22 be 3b 5b 00 fa 2a 8c s.......".;[..*. 00:22:29.506 00000180 d1 d0 d4 a6 2f 76 c9 01 d7 60 9c 56 1f bc 10 d4 ..../v...`.V.... 00:22:29.506 00000190 0d b1 3f 66 ad 1a 54 f6 02 8a 06 41 da 27 a8 de ..?f..T....A.'.. 00:22:29.506 000001a0 65 43 45 dd 27 a7 72 b3 a9 a3 ea 1f 04 fe 80 8a eCE.'.r......... 00:22:29.506 000001b0 ba c4 b9 fa a4 cb 34 4f 45 56 c5 9b 12 30 d8 45 ......4OEV...0.E 00:22:29.506 000001c0 84 4e 8d 12 04 75 da 46 c8 60 22 fc a9 85 ce 32 .N...u.F.`"....2 00:22:29.506 000001d0 7f 75 c8 98 55 71 e0 a3 6c 19 5b 48 fc 9e 96 9f .u..Uq..l.[H.... 00:22:29.506 000001e0 3a e9 78 14 dc 31 f9 36 65 fc e3 3d 86 2d b1 c5 :.x..1.6e..=.-.. 00:22:29.506 000001f0 af 0e 9b 50 11 d0 42 f9 09 5e 0a f3 40 27 93 f1 ...P..B..^..@'.. 00:22:29.506 00000200 fc 18 2b 55 7c dd c8 a4 d5 5d 43 b9 5a 15 31 8a ..+U|....]C.Z.1. 00:22:29.506 00000210 6f a4 5a 7d f2 75 f0 11 f1 57 e6 93 79 14 4b 5c o.Z}.u...W..y.K\ 00:22:29.506 00000220 18 14 fd 82 d4 a8 e3 0f c2 1a 9b af ca 36 7f 3f .............6.? 00:22:29.506 00000230 12 32 f5 04 93 1e a9 7a e6 62 f6 ef 80 bd 86 07 .2.....z.b...... 00:22:29.506 00000240 79 5f e3 37 10 7e 86 2c be 66 ff 94 4c 92 68 78 y_.7.~.,.f..L.hx 00:22:29.506 00000250 97 88 b5 db 8d 44 13 12 bf 3d 94 45 55 5e 00 bf .....D...=.EU^.. 00:22:29.506 00000260 61 31 36 9c 62 6f f2 d4 7a 0a 87 18 ab 77 38 3f a16.bo..z....w8? 00:22:29.506 00000270 ac 4d b5 e1 41 39 1d 53 03 35 ec 27 f9 97 2d 7f .M..A9.S.5.'..-. 00:22:29.506 00000280 77 78 f9 f9 d2 66 ed 43 cd b8 be c3 d4 b4 73 3f wx...f.C......s? 00:22:29.506 00000290 17 88 75 af 63 48 36 ed db 27 43 10 56 3e 23 8d ..u.cH6..'C.V>#. 
00:22:29.506 000002a0 84 6e 80 f5 60 8a 14 06 82 54 81 76 a8 e2 04 41 .n..`....T.v...A 00:22:29.506 000002b0 0e 91 74 f0 19 ed 14 e4 e3 c5 e4 e6 e9 d7 8b db ..t............. 00:22:29.506 000002c0 bb 6e 78 01 58 83 01 19 c8 34 c7 98 f0 e7 6a 5d .nx.X....4....j] 00:22:29.506 000002d0 4f 78 30 ea 88 72 89 0b c3 bc 2b 59 cb ae de 93 Ox0..r....+Y.... 00:22:29.506 000002e0 a6 7a f4 2f 18 81 e1 05 3c 62 46 ff 3d c0 ef 5d .z./........a= 00:22:29.506 000000e0 14 8d ec 3b e8 93 c0 b8 04 d1 0a c0 a6 52 2b e0 ...;.........R+. 00:22:29.506 000000f0 20 ca 3a 56 f8 b0 86 09 d8 29 b9 61 7b bb 12 5e .:V.....).a{..^ 00:22:29.506 00000100 e1 61 ce 86 ff a3 56 03 eb e2 3f 30 b6 3b 1b fe .a....V...?0.;.. 00:22:29.506 00000110 42 6d 18 3e 23 e6 ed 12 15 ac 25 2b 67 42 e2 0b Bm.>#.....%+gB.. 00:22:29.506 00000120 4f f8 d2 50 f7 8b c5 b9 d6 7b 27 98 82 bd 0a 38 O..P.....{'....8 00:22:29.506 00000130 3b 0c 02 af 03 f4 c7 57 be 59 5f 39 3e 15 08 ea ;......W.Y_9>... 00:22:29.506 00000140 73 f9 0e 5c cf 69 b4 5e 97 5b ad 1a af b2 7a 19 s..\.i.^.[....z. 00:22:29.506 00000150 ba f1 28 2f ef c3 55 f9 87 94 22 22 11 c9 16 55 ..(/..U...""...U 00:22:29.506 00000160 b1 36 55 c2 72 3c 0d 66 38 1b f7 48 af c8 fa e6 .6U.r<.f8..H.... 00:22:29.506 00000170 4a 18 53 f1 45 01 33 7c 2e 32 1d e9 09 ea 5e 69 J.S.E.3|.2....^i 00:22:29.506 00000180 bc 38 6d 29 35 34 07 2b 7c 98 82 eb fb 9b 26 0c .8m)54.+|.....&. 00:22:29.506 00000190 fb 13 a5 05 2c b4 90 39 ad 44 eb e8 a0 82 79 f2 ....,..9.D....y. 00:22:29.506 000001a0 02 de e3 9c f2 0e fd c8 b6 49 37 55 ca 07 4d e9 .........I7U..M. 00:22:29.506 000001b0 4e 5f 0f b2 83 73 da 4a ca cd 30 a1 66 1e d4 6c N_...s.J..0.f..l 00:22:29.506 000001c0 f8 37 48 2d d7 6d c0 9f 46 63 36 37 4c d1 80 0d .7H-.m..Fc67L... 00:22:29.506 000001d0 9a 0b 49 eb 58 b8 60 d6 4a b9 dd 64 ac f2 62 d1 ..I.X.`.J..d..b. 00:22:29.506 000001e0 81 cf 81 7a 81 fb b1 96 ee 9b ea c8 4d 88 a3 3d ...z........M..= 00:22:29.506 000001f0 db eb 48 a6 38 89 ae 3e 3b 48 35 e5 62 c4 9c 64 ..H.8..>;H5.b..d 00:22:29.506 00000200 3e 12 71 b8 87 c9 79 c7 f0 a6 60 6a 8a c9 d2 0a >.q...y...`j.... 00:22:29.506 00000210 6d be 29 35 b8 47 59 18 4b 19 38 13 57 da 5b 06 m.)5.GY.K.8.W.[. 00:22:29.506 00000220 83 69 55 34 0e 3d bb da ba bb 93 d6 6e 9f e9 1d .iU4.=......n... 00:22:29.506 00000230 e3 90 a8 50 77 dd 98 ce 4f 61 44 b9 65 9a 03 d3 ...Pw...OaD.e... 00:22:29.506 00000240 6a eb 0f 03 3d b8 d9 92 18 a1 68 9b f9 80 29 22 j...=.....h...)" 00:22:29.506 00000250 83 62 f2 d0 ab c0 59 56 34 b5 cb 03 5f 5f 20 69 .b....YV4...__ i 00:22:29.506 00000260 31 43 79 c8 50 14 55 7d bc 32 ca a9 e5 bf 88 69 1Cy.P.U}.2.....i 00:22:29.506 00000270 6b fa 2c a2 61 7c 79 20 7a e4 83 55 f8 a6 69 3b k.,.a|y z..U..i; 00:22:29.506 00000280 67 84 3b 5f 9e 45 b5 c5 9e f5 9c bd 70 ac c3 e9 g.;_.E......p... 00:22:29.506 00000290 c3 37 3d 1e ae b7 59 6d 60 23 90 df cf d6 e4 15 .7=...Ym`#...... 00:22:29.506 000002a0 dc 3b ad ba e4 e8 06 8e 5c a3 da ca 02 4a e9 3e .;......\....J.> 00:22:29.506 000002b0 47 d7 93 f3 7f da 30 d2 66 5b b9 1b 66 50 57 36 G.....0.f[..fPW6 00:22:29.506 000002c0 84 de 06 0a 42 73 4a 3b c2 d1 d3 46 b6 94 b9 ff ....BsJ;...F.... 00:22:29.506 000002d0 e3 08 cd 84 4d 95 36 26 7b c4 14 85 96 50 66 22 ....M.6&{....Pf" 00:22:29.507 000002e0 a8 bd 62 76 4b ea 73 7d 50 33 c4 e3 e0 45 40 e0 ..bvK.s}P3...E@. 00:22:29.507 000002f0 d8 36 f2 49 9e 5d 75 55 5b b1 7d c0 1f 9e e5 d7 .6.I.]uU[.}..... 
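The nvme_auth_send_reply entry that follows (key=key1, hash=2, dhgroup=4) ties back to the negotiate lines a little earlier in the log: hash 2 is the sha384 digest and dhgroup 4 is ffdhe6144, and in this log every successful round walks the same per-qpair sequence of negotiate, await-negotiate, await-challenge, await-reply, await-success1, await-success2, done. When triaging output like this, the pubkey/secret dumps can usually be skipped; the signal is in those state lines and in "authentication completed successfully". A minimal sketch for pulling the transitions out of the raw text (the regex only assumes the "[subnqn:hostnqn:qid] auth state: ..." shape seen here; it is illustrative, not an SPDK-defined format):

    # Sketch (Python, not SPDK code): collect per-qpair auth state transitions.
    import re
    from collections import defaultdict

    # Assumes the "[<subnqn>:<hostnqn>:<qid>] auth state: <state>" shape seen in this log.
    STATE_RE = re.compile(r"\[([^][]+:\d+)\] auth state: ([\w-]+)")

    def auth_state_trace(log_text):
        """Map 'subnqn:hostnqn:qid' -> ordered list of auth states."""
        trace = defaultdict(list)
        for qpair, state in STATE_RE.findall(log_text):
            trace[qpair].append(state)
        return trace

    # A round that authenticated successfully shows up as:
    # negotiate, await-negotiate, await-challenge, await-reply,
    # await-success1, await-success2, done.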
00:22:29.507 [2024-09-27 13:27:16.011131] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key1, hash=2, dhgroup=4, seq=3775755255, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.507 [2024-09-27 13:27:16.011451] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.507 [2024-09-27 13:27:16.065758] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.507 [2024-09-27 13:27:16.066109] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.507 [2024-09-27 13:27:16.066317] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.507 [2024-09-27 13:27:16.066557] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.507 [2024-09-27 13:27:16.117355] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.507 [2024-09-27 13:27:16.117598] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.507 [2024-09-27 13:27:16.117748] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 4 (ffdhe6144) 00:22:29.507 [2024-09-27 13:27:16.117917] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.507 [2024-09-27 13:27:16.118126] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.507 ctrlr pubkey: 00:22:29.507 00000000 fa df 55 7e ff f2 31 2d ef 23 d0 aa db cf ff 03 ..U~..1-.#...... 00:22:29.507 00000010 81 b4 02 64 54 c5 23 d8 a9 16 ef 0b ed 4f c2 1b ...dT.#......O.. 00:22:29.507 00000020 be e1 61 34 88 1e e2 35 e6 77 ff 10 38 4f 8d 2a ..a4...5.w..8O.* 00:22:29.507 00000030 94 37 aa ba 01 db c5 35 77 fc ac 4e 7a f4 36 a5 .7.....5w..Nz.6. 00:22:29.507 00000040 f0 1a ae 0f 4d 22 65 ca 31 59 66 7e 1c 4b 42 30 ....M"e.1Yf~.KB0 00:22:29.507 00000050 81 1d dc 9e 9a f2 73 b6 67 0a e6 a3 b0 8e ce 2e ......s.g....... 00:22:29.507 00000060 d7 5d a8 f0 01 9a 68 be 10 b3 b5 2c 54 b7 23 d7 .]....h....,T.#. 00:22:29.507 00000070 3b bf 2f 85 7f 73 c1 7d 03 01 52 af ca 0f 15 3f ;./..s.}..R....? 00:22:29.507 00000080 dc e7 f0 db 56 01 85 0d d5 3e f9 b8 49 e4 0d 91 ....V....>..I... 00:22:29.507 00000090 3e 0d 0b 21 0a 4f 0f f4 94 e3 0f 6b c3 3a cd d6 >..!.O.....k.:.. 00:22:29.507 000000a0 f5 ce ec e6 e1 cf 1a 7f ec 2d d0 19 03 f3 21 fc .........-....!. 00:22:29.507 000000b0 25 41 22 c9 90 c2 46 d8 9d 6a a8 3a a9 38 ab 1e %A"...F..j.:.8.. 00:22:29.507 000000c0 95 5d 28 f3 f3 0d 59 5e 32 e4 aa aa 71 fd b6 64 .](...Y^2...q..d 00:22:29.507 000000d0 37 86 48 43 68 09 7c 4c 73 cd fd 9e e2 6a 9f f3 7.HCh.|Ls....j.. 00:22:29.507 000000e0 b5 5a 11 34 af 37 ba 6b 52 cd 3e c3 aa e3 ab fd .Z.4.7.kR.>..... 00:22:29.507 000000f0 57 aa be 12 92 f4 eb 57 eb 7b 4f cb 8f 66 6b 66 W......W.{O..fkf 00:22:29.507 00000100 4d 85 a7 bd 56 2a e0 e8 db 6f dd df cc 74 7e 09 M...V*...o...t~. 
00:22:29.507 00000110 d5 e5 ed c3 bc 53 a2 78 3e 07 0a cc 98 1b 05 64 .....S.x>......d 00:22:29.507 00000120 0a c2 11 ef 98 e9 96 ef c3 ce 7f 16 cf c0 ed ac ................ 00:22:29.507 00000130 0e 11 f6 e5 05 ef 77 21 35 2d 8a 39 0d 70 66 25 ......w!5-.9.pf% 00:22:29.507 00000140 ad 13 de 3e 8f 85 16 bf 39 61 28 66 be 34 8b 32 ...>....9a(f.4.2 00:22:29.507 00000150 0a 82 20 4c a4 22 eb 9b 58 1f 83 81 b5 d0 df 80 .. L."..X....... 00:22:29.507 00000160 56 b8 e6 73 05 17 5e 8f dc c6 ad f6 ef 4e 25 9c V..s..^......N%. 00:22:29.507 00000170 8f d0 98 3a c3 03 b4 7c 08 34 f1 9b 19 1e 11 e6 ...:...|.4...... 00:22:29.507 00000180 14 62 4e c9 41 97 d6 e2 74 29 5d fc 0f c4 89 ae .bN.A...t)]..... 00:22:29.507 00000190 df a1 11 32 c7 8b f4 d4 94 f0 e6 3e 40 66 47 91 ...2.......>@fG. 00:22:29.507 000001a0 4a 8b a0 30 54 fb f2 a5 c7 4e 9a 14 0c 50 7d 84 J..0T....N...P}. 00:22:29.507 000001b0 e6 db c3 b3 05 c5 93 f1 6f b7 1d 83 9a 0f 01 31 ........o......1 00:22:29.507 000001c0 47 89 0a 9b ea 29 44 42 f7 7f c2 93 b3 0c 97 d6 G....)DB........ 00:22:29.507 000001d0 96 79 60 7b fc b0 0c f9 c0 15 36 73 6e bd 8a 67 .y`{......6sn..g 00:22:29.507 000001e0 b1 d4 60 af 0f 1d 02 36 9b 50 3b 82 bf f5 ff a4 ..`....6.P;..... 00:22:29.507 000001f0 9d a6 bf 1a 62 4d 9e b1 f5 f0 8c 5a b0 cc e5 6c ....bM.....Z...l 00:22:29.507 00000200 f2 07 13 fc 8d 9b 84 79 61 be 8c 0c 2e 05 c0 1a .......ya....... 00:22:29.507 00000210 8f 8b 94 d1 e4 4e ca 37 5b 20 54 58 c3 e9 c7 95 .....N.7[ TX.... 00:22:29.507 00000220 24 ca 42 47 8a 1a c8 53 b3 63 bc 82 40 eb 51 d2 $.BG...S.c..@.Q. 00:22:29.507 00000230 35 9c 2a f7 21 04 48 07 78 89 0f 72 4c 3a 11 5b 5.*.!.H.x..rL:.[ 00:22:29.507 00000240 ed 72 c5 7f ff 73 21 55 3f 0c a4 f1 f4 28 47 50 .r...s!U?....(GP 00:22:29.507 00000250 dc 85 e4 55 f9 d9 af 09 30 73 91 f9 7e 51 b9 09 ...U....0s..~Q.. 00:22:29.507 00000260 65 9b 99 58 d5 06 e3 25 ee c0 35 44 f6 66 a2 ca e..X...%..5D.f.. 00:22:29.507 00000270 f1 8a c9 f8 77 3c e4 a2 bd 14 e4 8b c7 60 98 ae ....w<.......`.. 00:22:29.507 00000280 3a 0a da 36 c2 67 e2 56 d1 d2 96 d9 67 ac 0e 5d :..6.g.V....g..] 00:22:29.507 00000290 dd 29 1c 1c d2 68 d1 a9 b0 c3 11 68 de 04 90 be .)...h.....h.... 00:22:29.507 000002a0 df 68 2a a4 99 f3 dd 57 82 1a 94 c1 5f b5 f7 1d .h*....W...._... 00:22:29.507 000002b0 9f 16 07 f1 bc 36 59 72 50 bb cc bd e6 fa b1 5c .....6YrP......\ 00:22:29.507 000002c0 78 5d f9 76 a7 24 56 86 61 7a b1 3b 6f c6 dd 0d x].v.$V.az.;o... 00:22:29.507 000002d0 9f 4a 86 ac 3f 9a 19 ff 70 3c e7 12 cd 80 ac 74 .J..?...p<.....t 00:22:29.507 000002e0 9c e2 01 3e fd 5c 9d ef 75 1a 7a 1b 2b f3 19 8f ...>.\..u.z.+... 00:22:29.507 000002f0 f1 23 25 80 f3 70 f7 82 48 2c 9e f4 65 e0 7d c2 .#%..p..H,..e.}. 00:22:29.507 host pubkey: 00:22:29.507 00000000 a1 cb c2 e4 2a 7c c8 de cc 21 c1 ac c2 4d 48 0b ....*|...!...MH. 00:22:29.507 00000010 d6 8b c3 91 56 af ec 6c 07 7f eb 20 9a bc da 5c ....V..l... ...\ 00:22:29.507 00000020 21 5f 3a 61 99 53 9f 5c de 4b 1f a1 9d d7 da ba !_:a.S.\.K...... 00:22:29.507 00000030 ad 94 56 18 22 7f db 74 37 57 be 78 52 0a 4c e2 ..V."..t7W.xR.L. 00:22:29.507 00000040 32 76 77 78 a8 e4 50 59 d5 f4 04 2c 60 08 26 ba 2vwx..PY...,`.&. 00:22:29.507 00000050 3d d0 a5 ee 03 35 49 8e dd d4 bb b5 73 49 48 fe =....5I.....sIH. 00:22:29.507 00000060 a3 72 6b d1 ec ce 98 2f 2a 73 51 d0 f6 98 d9 d3 .rk..../*sQ..... 00:22:29.507 00000070 42 9e f3 75 78 80 42 87 aa 33 ba 3b 91 b8 e2 97 B..ux.B..3.;.... 00:22:29.507 00000080 f0 ff 3d 81 40 eb 3e 15 98 be 39 10 d3 22 7e db ..=.@.>...9.."~. 
00:22:29.507 00000090 a7 58 76 fa 7b d1 f9 67 86 c0 0d 87 d8 ea f5 fb .Xv.{..g........ 00:22:29.507 000000a0 f3 62 1d 5b a7 a3 0b 02 bb ad c5 6f 86 f2 0c 81 .b.[.......o.... 00:22:29.507 000000b0 5a c8 1e 10 de 0f ed ee 6a 4c 9e 56 db d5 7b 79 Z.......jL.V..{y 00:22:29.507 000000c0 59 07 4d 91 3c 12 86 06 7b af 6a 98 78 c8 40 4b Y.M.<...{.j.x.@K 00:22:29.507 000000d0 a1 06 1b 83 23 cf 6b e2 6f 1b f0 b2 73 7a 7a e3 ....#.k.o...szz. 00:22:29.507 000000e0 63 36 96 70 4b 23 3c 20 22 a2 75 ef 29 4b ee e4 c6.pK#< ".u.)K.. 00:22:29.507 000000f0 31 f4 59 30 52 a1 74 7f 2a 0f 9f fb 38 47 87 91 1.Y0R.t.*...8G.. 00:22:29.507 00000100 e5 f8 d0 68 1a 3e 1f 30 3e 97 e8 c9 23 f1 54 e6 ...h.>.0>...#.T. 00:22:29.507 00000110 c8 85 69 77 74 7d 84 86 55 b5 bf 15 56 6f 54 80 ..iwt}..U...VoT. 00:22:29.507 00000120 34 03 f3 b8 88 66 54 7c 34 f1 ab d2 d8 fb 26 b3 4....fT|4.....&. 00:22:29.507 00000130 b6 28 a6 28 f0 d0 c3 c5 52 7a 0c df 6b 3a d8 38 .(.(....Rz..k:.8 00:22:29.507 00000140 32 aa 80 30 7b 6b f5 a4 68 6c 3b d2 fd b5 6e 93 2..0{k..hl;...n. 00:22:29.507 00000150 14 0e e5 39 57 c9 38 22 cc 82 d9 2f fe a1 1f 4d ...9W.8".../...M 00:22:29.507 00000160 e4 ef 48 14 e9 b3 b3 51 db d4 1a 04 fc ad be 38 ..H....Q.......8 00:22:29.507 00000170 48 a8 0a 47 de 82 d8 ad ef 80 23 54 76 e9 91 6d H..G......#Tv..m 00:22:29.507 00000180 4a 4a 44 21 7e 2b ff 07 b3 92 cb 16 10 97 46 b0 JJD!~+........F. 00:22:29.507 00000190 44 72 57 f6 80 ec b8 d6 7d 3b 42 54 e5 33 81 d9 DrW.....};BT.3.. 00:22:29.507 000001a0 83 ad 0f ec 5e 5f 18 b4 25 65 a6 db 4b 65 e4 f0 ....^_..%e..Ke.. 00:22:29.507 000001b0 01 8e 98 0e 3f fc 19 73 9a 1b 60 a6 d3 db 44 20 ....?..s..`...D 00:22:29.507 000001c0 a9 85 1d 50 48 f4 22 57 c3 56 6f b1 30 d7 06 94 ...PH."W.Vo.0... 00:22:29.507 000001d0 02 7f df c0 5e fd 80 30 4f 6d cd 88 9f 2c c6 0b ....^..0Om...,.. 00:22:29.507 000001e0 f5 9d c3 22 fd 08 cb 11 79 f9 cd 32 6f 7a b8 86 ..."....y..2oz.. 00:22:29.507 000001f0 14 c3 45 db 90 12 12 61 f4 43 d5 2f 1f 1b 70 23 ..E....a.C./..p# 00:22:29.507 00000200 f5 b3 d9 7d a2 18 74 8e 69 8b 20 65 01 a4 9e 59 ...}..t.i. e...Y 00:22:29.507 00000210 18 3a e3 f1 53 89 5e d1 03 e3 c9 94 f4 2c 91 9a .:..S.^......,.. 00:22:29.507 00000220 2d fb fa 02 f0 8e 90 fe 5e 21 cc 21 91 e3 3e 90 -.......^!.!..>. 00:22:29.507 00000230 7d e3 d9 40 df 70 60 69 e4 d0 95 3a b2 a2 77 06 }..@.p`i...:..w. 00:22:29.507 00000240 80 9e 30 14 71 d8 cc f5 42 17 7f 0a 03 dd d4 0a ..0.q...B....... 00:22:29.507 00000250 8f 61 a8 12 22 81 20 3b 0b 2c ca b5 3b da dc 95 .a..". ;.,..;... 00:22:29.507 00000260 fd 43 d1 73 35 bd 2e a6 97 73 f1 ba e7 58 ab 0a .C.s5....s...X.. 00:22:29.507 00000270 11 01 f5 36 1d 39 f9 69 96 93 bb ae 72 32 d0 fc ...6.9.i....r2.. 00:22:29.507 00000280 be 6d 38 37 4b 44 08 71 a4 9e 54 26 b3 52 c5 a6 .m87KD.q..T&.R.. 00:22:29.507 00000290 63 02 61 66 f7 ef ac 59 cf 9c 7d a4 54 6c dc c9 c.af...Y..}.Tl.. 00:22:29.507 000002a0 d8 a1 7b dc 38 c8 6a b7 55 f8 5e 40 dd 8b 6a ff ..{.8.j.U.^@..j. 00:22:29.507 000002b0 32 a9 5e 06 e2 2b d3 03 d2 75 02 16 a8 ee 76 22 2.^..+...u....v" 00:22:29.507 000002c0 4a e9 42 3e 53 0c 8d 94 4a a7 2c 4b ec 4d 1f f7 J.B>S...J.,K.M.. 00:22:29.507 000002d0 c9 fd 72 2c 2f 07 55 73 9b d4 22 8c 0c 49 27 e9 ..r,/.Us.."..I'. 00:22:29.507 000002e0 dc 37 5e b4 a8 49 90 de 36 5b 2e dc f7 02 a3 71 .7^..I..6[.....q 00:22:29.507 000002f0 47 46 d4 a6 49 2d 29 de 12 c8 f2 2d ed 8f 27 85 GF..I-)....-..'. 
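Note on the dumps above: the "ctrlr pubkey" and "host pubkey" blocks (and the "dh secret" block that follows) each run out to offset 000002f0, i.e. 768 bytes, which matches the 6144-bit ffdhe6144 group reported in the `dhgroup: 4 (ffdhe6144)` records. As a rough illustration of how the two public values and the shared secret relate, the sketch below uses a toy modulus in place of the real RFC 7919 ffdhe6144 prime; it is not SPDK's nvme_auth.c code, only the underlying Diffie-Hellman identity the dumps represent.

```python
# Illustrative only: the relationship between the "ctrlr pubkey", "host pubkey"
# and "dh secret" dumps in this log. A small Mersenne prime stands in for the
# 6144-bit ffdhe6144 modulus actually negotiated (dhgroup=4); the real key
# exchange is performed inside SPDK's nvme_auth.c and is not reproduced here.
import secrets

P = 2**61 - 1   # toy prime, NOT the ffdhe6144 modulus
G = 2           # toy generator

host_priv = secrets.randbelow(P - 2) + 1
ctrlr_priv = secrets.randbelow(P - 2) + 1

host_pub = pow(G, host_priv, P)    # analogous to the "host pubkey" dump
ctrlr_pub = pow(G, ctrlr_priv, P)  # analogous to the "ctrlr pubkey" dump

# Both sides derive the same value; the "dh secret" dump is that shared secret.
assert pow(ctrlr_pub, host_priv, P) == pow(host_pub, ctrlr_priv, P)
```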
00:22:29.507 dh secret: 00:22:29.507 00000000 9b 57 62 3c 4a ea d8 c2 cc 8e 89 d5 ce 2f 70 cb .Wbi|z...M\47...T 00:22:29.507 00000030 96 f3 e3 85 91 7c 27 2d 9e 98 f8 ba 68 3d 7f 55 .....|'-....h=.U 00:22:29.507 00000040 d3 2d 04 ac d5 cc f2 de d0 10 0a 12 e2 dc 1f df .-.............. 00:22:29.507 00000050 01 b1 a1 5f d8 cd cf 50 e9 98 e5 10 e3 73 38 dd ..._...P.....s8. 00:22:29.507 00000060 91 f0 10 2e 58 bb ba ad 48 a2 88 8b 0e 0b 34 6e ....X...H.....4n 00:22:29.507 00000070 59 6d 95 3f 41 56 fa 0f 80 c7 00 19 d6 95 ca 94 Ym.?AV.......... 00:22:29.507 00000080 eb 05 3f 94 d3 d9 7b cb 07 2c fe 32 3a bb fd 83 ..?...{..,.2:... 00:22:29.507 00000090 e7 5c 13 c4 50 3d 91 fe c2 12 16 77 f2 04 b5 bb .\..P=.....w.... 00:22:29.507 000000a0 19 7f 5d aa 26 78 93 a9 7a 0b a5 d8 fb 9d 4b 6b ..].&x..z.....Kk 00:22:29.507 000000b0 49 a9 30 77 a2 19 26 80 98 3d 38 6e 26 59 f2 57 I.0w..&..=8n&Y.W 00:22:29.507 000000c0 ca 8e 3e e3 60 93 58 5c dd b6 a9 63 47 93 07 ce ..>.`.X\...cG... 00:22:29.507 000000d0 fe 16 45 01 47 6d b5 ea 1c c9 cf 54 5f cb 99 b0 ..E.Gm.....T_... 00:22:29.507 000000e0 20 ba 6b e0 7d 0e 5d 63 e6 d7 7a 70 d9 db 70 73 .k.}.]c..zp..ps 00:22:29.507 000000f0 67 7f d0 ed 14 cd 6d 8b 52 c8 77 90 83 69 05 bd g.....m.R.w..i.. 00:22:29.507 00000100 b0 7f a5 6b 41 19 29 ac d5 4c 76 09 ea dc b1 af ...kA.)..Lv..... 00:22:29.507 00000110 0c 28 00 cd 65 f2 20 8c fb c2 c1 86 62 18 35 25 .(..e. .....b.5% 00:22:29.507 00000120 4d f0 b5 29 bb a2 c6 90 52 2e 22 bb 0f 12 97 9e M..)....R."..... 00:22:29.507 00000130 d5 d3 8c d8 60 e7 7a 1e a9 dd 2a ef 3d e5 c2 52 ....`.z...*.=..R 00:22:29.508 00000140 4a b8 d1 f4 5e 88 cb 36 67 85 b2 4d e0 a1 d0 df J...^..6g..M.... 00:22:29.508 00000150 b3 76 75 41 4a b0 93 03 2c 50 5f f6 e9 80 88 33 .vuAJ...,P_....3 00:22:29.508 00000160 66 21 c7 1b 74 89 53 6d 4e b0 56 25 ac 9b c5 58 f!..t.SmN.V%...X 00:22:29.508 00000170 1f 2d 99 13 cb 9c e5 c6 ee f2 99 82 39 a3 a6 ac .-..........9... 00:22:29.508 00000180 f8 cc 1a a5 05 4e 4f 70 56 39 31 08 d4 bc 5d cb .....NOpV91...]. 00:22:29.508 00000190 61 da f6 e5 ea e4 5c 14 d4 cb 70 0f bb 6c 90 de a.....\...p..l.. 00:22:29.508 000001a0 4e 0b 1c a0 53 a9 df 07 83 80 52 73 66 f5 c0 df N...S.....Rsf... 00:22:29.508 000001b0 c0 b2 18 e7 b2 7f a0 9e e2 40 51 87 15 4a 30 69 .........@Q..J0i 00:22:29.508 000001c0 9e 52 29 b7 79 be 5c 39 fc fe 58 02 1c d2 a9 43 .R).y.\9..X....C 00:22:29.508 000001d0 7c 03 07 3c 6a a9 dc 24 b4 d6 6c 10 44 ea 66 91 |.....O|VG...W.H.. 00:22:29.508 00000200 9b 01 04 f8 e5 cf 73 04 84 60 ae ad 0a 79 93 1b ......s..`...y.. 00:22:29.508 00000210 f5 ad ac 5c 7d ee 00 1e ae d2 b0 54 7c 92 a3 3a ...\}......T|..: 00:22:29.508 00000220 cc f9 87 da e3 09 cc ac 98 ec 36 e1 1e 32 c5 19 ..........6..2.. 00:22:29.508 00000230 7f a5 62 dc bb e7 1f 02 89 25 50 ce fe b7 39 2a ..b......%P...9* 00:22:29.508 00000240 c7 e0 94 0a c0 61 62 0e 43 b4 ae 96 b1 7e a3 b8 .....ab.C....~.. 00:22:29.508 00000250 63 54 35 f7 31 bc 1a f3 48 f2 90 50 8b 3f 01 6e cT5.1...H..P.?.n 00:22:29.508 00000260 ee 9d 9c 3b b8 20 66 40 06 bc 3e 52 29 31 b0 3d ...;. f@..>R)1.= 00:22:29.508 00000270 06 f0 aa ba da c2 29 08 b0 8f f6 4c ea af 34 b1 ......)....L..4. 00:22:29.508 00000280 ca 6e 70 33 35 fe 69 f0 c3 26 42 fc e8 8f 94 a0 .np35.i..&B..... 00:22:29.508 00000290 75 0b 7b 88 81 b5 b1 24 d8 55 e6 4f 08 0d cb 87 u.{....$.U.O.... 00:22:29.508 000002a0 e6 fa 1e da c0 89 35 8a 49 5d 1e af 0e 08 12 7f ......5.I]...... 00:22:29.508 000002b0 dd b1 43 ce 53 9b 8c 2e 92 7f 2a ea 74 e8 20 ba ..C.S.....*.t. . 
00:22:29.508 000002c0 1a 00 fb 4d c6 07 7c 6e 69 96 49 a3 6c 9b 69 03 ...M..|ni.I.l.i. 00:22:29.508 000002d0 25 86 e2 f5 59 83 85 b4 22 75 13 70 25 16 99 3e %...Y..."u.p%..> 00:22:29.508 000002e0 d7 ea f5 6b 1e 31 8a 20 1e ee f2 91 66 73 f5 41 ...k.1. ....fs.A 00:22:29.508 000002f0 15 92 6f 7f 1d 9f cd 79 6a 0f 9d 44 2a 29 8f a4 ..o....yj..D*).. 00:22:29.508 [2024-09-27 13:27:16.190567] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key1, hash=2, dhgroup=4, seq=3775755256, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.508 [2024-09-27 13:27:16.190879] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.508 [2024-09-27 13:27:16.242789] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.508 [2024-09-27 13:27:16.243339] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.508 [2024-09-27 13:27:16.243543] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.508 [2024-09-27 13:27:16.243817] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.508 [2024-09-27 13:27:16.369468] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.508 [2024-09-27 13:27:16.369798] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 2 (sha384) 00:22:29.508 [2024-09-27 13:27:16.369909] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 4 (ffdhe6144) 00:22:29.508 [2024-09-27 13:27:16.370035] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.508 [2024-09-27 13:27:16.370261] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.508 ctrlr pubkey: 00:22:29.508 00000000 26 1b 35 f8 78 9d dc 7f 29 ca d5 27 81 ce 99 f9 &.5.x...)..'.... 00:22:29.508 00000010 53 2f 05 79 b9 58 fe 89 4c e1 c6 ae 52 04 b6 3f S/.y.X..L...R..? 00:22:29.508 00000020 d4 50 20 5e 0a 84 15 ba 54 e8 ac 26 0b bc 94 37 .P ^....T..&...7 00:22:29.508 00000030 dc 37 50 4d 2c 76 84 86 2b 2a c3 c6 4a 0f 65 2a .7PM,v..+*..J.e* 00:22:29.508 00000040 06 96 a0 65 6c c0 37 95 cd 2a bc d5 1a a3 7c 22 ...el.7..*....|" 00:22:29.508 00000050 a9 76 5a e0 b0 1c 60 e2 77 88 77 dc 22 29 ab 8e .vZ...`.w.w.").. 00:22:29.508 00000060 69 e9 7a 26 3e 28 54 19 ed 99 b2 0a bc e8 b5 d8 i.z&>(T......... 00:22:29.508 00000070 1b ef 12 3e b7 d2 3b 0d df b9 11 02 58 a4 22 2f ...>..;.....X."/ 00:22:29.508 00000080 96 9f 90 9e b4 4f 2e ad 37 7f 3c b5 52 21 f5 19 .....O..7.<.R!.. 00:22:29.508 00000090 65 09 30 6f 3c cb 8a 74 d1 87 90 5b eb 9c cc 0f e.0o<..t...[.... 00:22:29.508 000000a0 11 b9 d8 0c 00 25 3c f8 0d e3 f6 d4 37 9d 45 39 .....%<.....7.E9 00:22:29.508 000000b0 16 5d af f6 13 ee 12 95 71 34 88 77 d2 eb 76 1e .]......q4.w..v. 00:22:29.508 000000c0 fb bb d0 f1 af 83 98 fa a2 83 38 a5 d3 2e d1 1b ..........8..... 
00:22:29.508 000000d0 27 b1 2d 7c 8f 93 fb 2e 6e cc 35 90 67 82 bb f1 '.-|....n.5.g... 00:22:29.508 000000e0 08 36 9c af c3 96 26 89 e8 03 9a 0c 7c 3b 32 e3 .6....&.....|;2. 00:22:29.508 000000f0 bf aa cc 39 03 07 2f ec 2e 4b f4 3a 44 f1 95 ff ...9../..K.:D... 00:22:29.508 00000100 81 96 e9 83 2e 3c 8f 6a 50 23 d5 a3 bc 5b a9 50 .....<.jP#...[.P 00:22:29.508 00000110 8b ac 07 ca a5 0a e2 d1 4f e8 8c ec b8 ea a4 06 ........O....... 00:22:29.508 00000120 c5 b5 5b b3 da 77 ed 72 ba 25 b3 b8 d2 5e 77 6a ..[..w.r.%...^wj 00:22:29.508 00000130 a7 9d 25 4f 71 7c ed a4 65 d3 98 04 cd 75 7f a6 ..%Oq|..e....u.. 00:22:29.508 00000140 11 3c af a9 c5 d0 21 cb 94 be 9d cd ac e8 d2 17 .<....!......... 00:22:29.508 00000150 fd 40 20 4e 11 a4 bf a5 a5 a1 9d 30 99 ef 86 e0 .@ N.......0.... 00:22:29.508 00000160 47 29 9d 2a 89 a7 75 e8 45 48 17 0b 21 85 8f bf G).*..u.EH..!... 00:22:29.508 00000170 57 65 3f 4e 98 19 46 fc 07 1e 27 ba f2 af 00 3e We?N..F...'....> 00:22:29.508 00000180 a9 12 22 0f 0d 88 ea 77 ae dd c8 9d 87 25 7e 14 .."....w.....%~. 00:22:29.508 00000190 85 0e 65 54 6e 7b 4c 66 4f d2 0b 0c 6d 55 c5 dc ..eTn{LfO...mU.. 00:22:29.508 000001a0 e8 bf a0 75 77 1f 7e 89 e6 b0 38 d8 23 73 16 e1 ...uw.~...8.#s.. 00:22:29.508 000001b0 f6 f5 45 a3 3d b5 f6 38 c4 fa c9 e9 5a e5 d6 42 ..E.=..8....Z..B 00:22:29.508 000001c0 9a 79 df 9e 83 b0 3d ee 1f b4 ba 42 b4 19 80 8b .y....=....B.... 00:22:29.508 000001d0 de 74 eb 02 a8 88 64 04 fa 60 f9 dc a5 63 bd 7e .t....d..`...c.~ 00:22:29.508 000001e0 51 b7 98 45 be 8e d7 c4 d6 64 47 9c c0 1e 3e 32 Q..E.....dG...>2 00:22:29.508 000001f0 fe 82 12 11 9f 5b b1 cd 5b eb 34 f5 e3 a9 3b 2b .....[..[.4...;+ 00:22:29.508 00000200 42 b3 85 a5 a5 78 1f 92 4f 66 7a 89 08 14 de 22 B....x..Ofz...." 00:22:29.508 00000210 96 9c ef f6 cb 29 e4 d0 e5 2c 87 27 b0 32 75 dc .....)...,.'.2u. 00:22:29.508 00000220 77 93 e6 3d 3c ca b1 8b 27 27 50 18 2d 2b 3d f9 w..=<...''P.-+=. 00:22:29.508 00000230 ad 9b f8 3e ae 2b da 36 fa 93 b5 51 e4 56 9d 70 ...>.+.6...Q.V.p 00:22:29.508 00000240 ab a6 05 1d a3 23 d4 5b e2 ec d3 5e e4 e0 28 9c .....#.[...^..(. 00:22:29.508 00000250 bb 8e 87 8e 3d 2e f3 39 e0 66 b5 28 e8 ee 6b 5c ....=..9.f.(..k\ 00:22:29.508 00000260 5c 95 19 0c 29 f2 52 8c bd 5a c2 82 85 27 69 92 \...).R..Z...'i. 00:22:29.508 00000270 9a f3 d7 21 7a e1 48 08 8f f8 c7 48 e1 29 f5 a7 ...!z.H....H.).. 00:22:29.508 00000280 15 46 2c 57 96 b4 9e a4 5f 58 28 cb 77 c0 75 02 .F,W...._X(.w.u. 00:22:29.508 00000290 78 33 ff 8a 79 06 4b c3 15 ab af 3f 73 44 d6 4a x3..y.K....?sD.J 00:22:29.508 000002a0 b0 f1 cc 41 33 b1 60 80 3c bd f4 5b 05 52 06 f7 ...A3.`.<..[.R.. 00:22:29.508 000002b0 6a 46 c9 76 e9 f7 47 6b 73 32 c2 a2 65 8b f5 cd jF.v..Gks2..e... 00:22:29.508 000002c0 41 d4 4c 27 3c 9f 45 49 de df 53 4f 0d a0 6c a6 A.L'<.EI..SO..l. 00:22:29.508 000002d0 95 86 28 42 d9 cf 06 f7 86 66 da 25 61 88 f5 55 ..(B.....f.%a..U 00:22:29.508 000002e0 7a 25 51 02 00 38 db 6c 3f 07 08 d1 c5 ea 0b 35 z%Q..8.l?......5 00:22:29.508 000002f0 1e 40 a7 4d 04 be 2e a5 bc e1 e7 bf 11 89 a4 21 .@.M...........! 00:22:29.508 host pubkey: 00:22:29.508 00000000 75 00 05 06 41 c6 d7 94 77 89 c8 b7 86 c5 e9 6f u...A...w......o 00:22:29.508 00000010 80 55 35 31 0d 12 4e 0b 15 cf 47 df 02 20 95 c4 .U51..N...G.. .. 00:22:29.508 00000020 d2 a3 ce bf 64 e4 dc d8 7c 95 f7 bb 31 94 da 53 ....d...|...1..S 00:22:29.508 00000030 44 63 73 55 25 57 35 67 57 db bd f1 53 7a f4 5f DcsU%W5gW...Sz._ 00:22:29.508 00000040 9a df e3 48 9b 8d 22 6c cc 07 27 a3 60 2f 86 3f ...H.."l..'.`/.? 
00:22:29.508 00000050 d3 82 01 fc df 90 ec 4d b4 66 a9 27 4e 4e ef f5 .......M.f.'NN.. 00:22:29.508 00000060 84 b3 fd 29 9b e0 a6 55 dc 4a f6 7b d3 ab d3 e9 ...)...U.J.{.... 00:22:29.508 00000070 26 2a 26 83 7f ac 7f d2 c3 02 5c 4b 74 76 69 8a &*&.......\Ktvi. 00:22:29.508 00000080 e9 21 ad e7 ed 1e 62 37 26 68 8f a9 60 6a bc e6 .!....b7&h..`j.. 00:22:29.508 00000090 79 b6 3a 89 6a 73 37 96 eb 76 d3 1d 11 b2 21 f1 y.:.js7..v....!. 00:22:29.508 000000a0 a1 d6 32 54 07 73 e5 2a a8 d0 42 84 8c b3 d4 c6 ..2T.s.*..B..... 00:22:29.508 000000b0 87 24 cc 8f 9e 78 db 57 0d 2b d0 1b fd 03 96 af .$...x.W.+...... 00:22:29.508 000000c0 4b c2 5f e8 93 8f 0f 25 09 29 d3 22 1b 48 75 22 K._....%.).".Hu" 00:22:29.508 000000d0 50 19 c8 51 a7 0c dc d3 eb d2 3d 14 c8 98 49 df P..Q......=...I. 00:22:29.508 000000e0 a5 2a 92 e0 23 51 f9 8b 8a eb 65 67 2e 02 bd 78 .*..#Q....eg...x 00:22:29.508 000000f0 96 1c 24 d5 35 b9 80 b7 c4 81 34 31 3b 18 9a 9b ..$.5.....41;... 00:22:29.508 00000100 81 64 71 bb d1 c1 76 88 5d 38 aa b0 8c 92 72 2d .dq...v.]8....r- 00:22:29.508 00000110 b6 c0 fc 00 48 1f 97 84 5b 3e 0d 75 c4 bd 75 8f ....H...[>.u..u. 00:22:29.508 00000120 0a e8 06 b4 e6 c1 09 25 c7 83 f4 07 ea 5a ce 45 .......%.....Z.E 00:22:29.508 00000130 55 30 ec 75 ad 16 d4 26 cc c4 91 59 42 d1 5d 05 U0.u...&...YB.]. 00:22:29.508 00000140 9c d9 d9 48 cb 4a 8c 13 85 02 1f d3 a3 a1 9f b0 ...H.J.......... 00:22:29.508 00000150 f8 a7 ba bd 83 d6 7f a8 62 8f 3c 3f 92 84 e0 25 ........b......~}h..F.... 00:22:29.509 000002f0 d2 9b 70 e8 eb 1f 22 71 e9 c3 40 d0 02 ed 8b bf ..p..."q..@..... 00:22:29.509 dh secret: 00:22:29.509 00000000 41 63 99 e1 74 1d e9 89 6f 84 a7 5e 34 d9 77 f4 Ac..t...o..^4.w. 00:22:29.509 00000010 61 3b f2 44 22 75 bd 27 d8 29 dd 49 a0 a6 e2 9c a;.D"u.'.).I.... 00:22:29.509 00000020 c7 85 21 a1 c1 be af d1 62 10 28 de ed 66 82 45 ..!.....b.(..f.E 00:22:29.509 00000030 bd a8 c2 98 5c 45 bc 10 1b 87 b1 be fe 5c f7 50 ....\E.......\.P 00:22:29.509 00000040 69 78 2c 04 3f 53 05 81 33 03 20 8a 11 1c 21 e7 ix,.?S..3. ...!. 00:22:29.509 00000050 32 ee a0 f9 2b 6b 0a 94 15 4e c4 f7 a1 53 95 68 2...+k...N...S.h 00:22:29.509 00000060 5b 1b ee 21 31 1c d1 6d b0 dd 75 da 93 bc 99 0b [..!1..m..u..... 00:22:29.509 00000070 91 48 58 a5 9e 44 53 d2 7a 1e a8 ea 64 a7 b2 e3 .HX..DS.z...d... 00:22:29.509 00000080 00 b6 5e 20 2d a4 cf 55 16 48 13 77 93 30 52 5a ..^ -..U.H.w.0RZ 00:22:29.509 00000090 65 f1 1b 64 40 d3 80 37 46 ec 52 9c 9f 71 85 43 e..d@..7F.R..q.C 00:22:29.509 000000a0 6c 15 0b 53 25 78 8c 6e 69 b2 a7 5f 57 43 db 67 l..S%x.ni.._WC.g 00:22:29.509 000000b0 2c b5 7b a6 7b 3a cd 62 36 9a 66 8a 4b 86 8d bf ,.{.{:.b6.f.K... 00:22:29.509 000000c0 bc 01 72 f0 fd e2 5e 26 aa cb 01 cf fb 74 36 3b ..r...^&.....t6; 00:22:29.509 000000d0 86 cd dc dc 9e f7 30 a3 e8 37 a8 2c 8e c2 e1 f7 ......0..7.,.... 00:22:29.509 000000e0 a7 57 35 32 16 2b 0a 27 d6 ad 07 d0 f5 91 e9 25 .W52.+.'.......% 00:22:29.509 000000f0 ba 90 62 23 7b 57 20 da 3d 90 51 f3 06 cd 08 65 ..b#{W .=.Q....e 00:22:29.509 00000100 42 61 5c c6 e8 df d6 44 d2 56 d0 d6 3f 24 51 9d Ba\....D.V..?$Q. 00:22:29.509 00000110 2c 63 4a 96 1b 95 32 d5 a4 60 42 f8 5d c0 88 35 ,cJ...2..`B.]..5 00:22:29.509 00000120 1c 34 4e 76 7d 22 3d 40 55 6c ae 37 16 21 e1 ad .4Nv}"=@Ul.7.!.. 00:22:29.509 00000130 63 29 f6 be 26 d0 cc 5b a2 82 7b 8e 92 3e 1f 5a c)..&..[..{..>.Z 00:22:29.509 00000140 94 65 bc e7 f6 4b 21 6d 40 e2 e0 4a f2 7d 32 de .e...K!m@..J.}2. 
00:22:29.509 00000150 73 8e 97 87 03 db 57 bc 19 1f 0e 5c 9a 95 bd 57 s.....W....\...W 00:22:29.509 00000160 43 7b 07 83 89 43 c1 33 4b ed db 98 18 a5 06 92 C{...C.3K....... 00:22:29.509 00000170 2d 13 f9 1e 52 f3 09 f4 b7 9c f3 81 2d f6 9d a5 -...R.......-... 00:22:29.509 00000180 41 5a 16 17 6f 44 60 6b e5 c4 d3 0b 48 61 82 32 AZ..oD`k....Ha.2 00:22:29.509 00000190 8a 1f f6 9c 35 0c cb 70 5c 7e d6 a5 5c b2 2a d1 ....5..p\~..\.*. 00:22:29.509 000001a0 62 16 a5 da 0b b0 e7 cb 51 9f 9d dd 6e 94 01 da b.......Q...n... 00:22:29.509 000001b0 df ae d6 86 c9 a7 61 ca 7c 75 2b ff 9a fb 56 ba ......a.|u+...V. 00:22:29.509 000001c0 6c 33 41 65 4d 03 e1 8d 7e c2 54 3e 6d 9a ba 4a l3AeM...~.T>m..J 00:22:29.509 000001d0 d9 98 54 39 81 44 72 8b ed ed 49 7d 96 1f ae 88 ..T9.Dr...I}.... 00:22:29.509 000001e0 ec 78 7f 2d 53 51 f4 ba 10 34 21 65 b1 47 9b 1f .x.-SQ...4!e.G.. 00:22:29.509 000001f0 31 5d fc 82 d3 a2 18 2b 0a 1c 4e 97 cb 41 bc 2e 1].....+..N..A.. 00:22:29.509 00000200 8e bb 00 6b 2e 42 f9 13 88 14 58 94 8c 14 46 55 ...k.B....X...FU 00:22:29.509 00000210 29 91 95 7a 80 02 1e 39 16 a0 32 cd 0e 98 97 7a )..z...9..2....z 00:22:29.509 00000220 8d ac 4b 90 49 54 ab b5 84 69 05 cd a1 4b 4c e8 ..K.IT...i...KL. 00:22:29.509 00000230 34 25 0b 8f 42 64 86 37 f3 27 ec 3a 17 82 22 98 4%..Bd.7.'.:..". 00:22:29.509 00000240 95 ce 3e c3 0a 5e ce 24 5e af 39 ec 12 f6 0a 5a ..>..^.$^.9....Z 00:22:29.509 00000250 46 95 c1 9f 8d 4a 65 8b 7e 38 ac 32 65 29 77 33 F....Je.~8.2e)w3 00:22:29.509 00000260 e2 32 76 a1 52 9c e9 15 62 c6 64 d1 8d 3e c5 cc .2v.R...b.d..>.. 00:22:29.509 00000270 b7 b6 55 25 12 63 10 6f 83 e2 fd 44 33 4f ca 7c ..U%.c.o...D3O.| 00:22:29.509 00000280 76 76 ca 6d 4f 87 68 d3 3f ee 60 b9 49 25 d6 f2 vv.mO.h.?.`.I%.. 00:22:29.509 00000290 4a 15 49 01 7d 8e 80 71 30 30 10 5d 85 cd 4d e5 J.I.}..q00.]..M. 00:22:29.509 000002a0 7e 12 95 5c 5c 59 2a b8 a2 2e 21 75 89 3b 1e 75 ~..\\Y*...!u.;.u 00:22:29.509 000002b0 4f c0 62 8e 1f d9 93 15 00 a4 68 35 5d 54 19 c6 O.b.......h5]T.. 00:22:29.509 000002c0 9a b8 93 c3 07 ff 7c 5b 3a 20 92 11 bf 3c 60 23 ......|[: ...<`# 00:22:29.509 000002d0 1b 4a 48 17 41 7d d0 c5 6d 4a 9a 89 f3 52 bb 94 .JH.A}..mJ...R.. 00:22:29.509 000002e0 65 31 e1 bf 33 76 d9 b3 80 1a c4 48 d7 2b f2 7c e1..3v.....H.+.| 00:22:29.509 000002f0 66 2d 68 48 80 af 51 a9 a0 33 e5 10 f3 a0 d1 1d f-hH..Q..3...... 
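Each qpair in these records (the `:0` and `:1` suffixes inside the bracketed `subnqn:hostnqn:qid` prefix) steps through the same sequence per authentication round: negotiate, await-negotiate, await-challenge, await-reply, await-success1, await-success2, done, with `nvme_fabric_qpair_authenticate_poll` logging `authentication completed successfully` along the way. A small, assumption-laden helper for checking that every qpair which started negotiation also ended in `done` is sketched below; the regex is inferred from the line format in this log, not taken from any SPDK tooling.

```python
# Scan nvme_auth debug output (format as seen in this log:
# "[subnqn:hostnqn:qid] auth state: <state>") and report whether each qpair
# reached "done". The line format is inferred from this log only and may
# differ in other SPDK builds or log levels.
import re
import sys
from collections import defaultdict

STATE_RE = re.compile(r"\[([^\]]+):(\d+)\] auth state: ([\w-]+)")

def qpair_states(stream):
    states = defaultdict(list)
    for line in stream:
        # findall copes with several records sharing one physical line
        for _nqns, qid, state in STATE_RE.findall(line):
            states[int(qid)].append(state)
    return states

if __name__ == "__main__":
    for qid, seq in sorted(qpair_states(sys.stdin).items()):
        ok = "negotiate" in seq and seq[-1] == "done"
        print(f"qpair {qid}: {len(seq)} transitions, reached done: {ok}")
```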
00:22:29.509 [2024-09-27 13:27:16.442852] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key2, hash=2, dhgroup=4, seq=3775755257, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.509 [2024-09-27 13:27:16.443135] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.509 [2024-09-27 13:27:16.495254] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.509 [2024-09-27 13:27:16.495605] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.509 [2024-09-27 13:27:16.495827] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.509 [2024-09-27 13:27:16.496017] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.509 [2024-09-27 13:27:16.547805] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.509 [2024-09-27 13:27:16.548090] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.509 [2024-09-27 13:27:16.548281] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 4 (ffdhe6144) 00:22:29.509 [2024-09-27 13:27:16.548476] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.509 [2024-09-27 13:27:16.548712] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.509 ctrlr pubkey: 00:22:29.509 00000000 26 1b 35 f8 78 9d dc 7f 29 ca d5 27 81 ce 99 f9 &.5.x...)..'.... 00:22:29.509 00000010 53 2f 05 79 b9 58 fe 89 4c e1 c6 ae 52 04 b6 3f S/.y.X..L...R..? 00:22:29.509 00000020 d4 50 20 5e 0a 84 15 ba 54 e8 ac 26 0b bc 94 37 .P ^....T..&...7 00:22:29.509 00000030 dc 37 50 4d 2c 76 84 86 2b 2a c3 c6 4a 0f 65 2a .7PM,v..+*..J.e* 00:22:29.509 00000040 06 96 a0 65 6c c0 37 95 cd 2a bc d5 1a a3 7c 22 ...el.7..*....|" 00:22:29.509 00000050 a9 76 5a e0 b0 1c 60 e2 77 88 77 dc 22 29 ab 8e .vZ...`.w.w.").. 00:22:29.509 00000060 69 e9 7a 26 3e 28 54 19 ed 99 b2 0a bc e8 b5 d8 i.z&>(T......... 00:22:29.509 00000070 1b ef 12 3e b7 d2 3b 0d df b9 11 02 58 a4 22 2f ...>..;.....X."/ 00:22:29.509 00000080 96 9f 90 9e b4 4f 2e ad 37 7f 3c b5 52 21 f5 19 .....O..7.<.R!.. 00:22:29.509 00000090 65 09 30 6f 3c cb 8a 74 d1 87 90 5b eb 9c cc 0f e.0o<..t...[.... 00:22:29.509 000000a0 11 b9 d8 0c 00 25 3c f8 0d e3 f6 d4 37 9d 45 39 .....%<.....7.E9 00:22:29.509 000000b0 16 5d af f6 13 ee 12 95 71 34 88 77 d2 eb 76 1e .]......q4.w..v. 00:22:29.509 000000c0 fb bb d0 f1 af 83 98 fa a2 83 38 a5 d3 2e d1 1b ..........8..... 00:22:29.509 000000d0 27 b1 2d 7c 8f 93 fb 2e 6e cc 35 90 67 82 bb f1 '.-|....n.5.g... 00:22:29.509 000000e0 08 36 9c af c3 96 26 89 e8 03 9a 0c 7c 3b 32 e3 .6....&.....|;2. 00:22:29.509 000000f0 bf aa cc 39 03 07 2f ec 2e 4b f4 3a 44 f1 95 ff ...9../..K.:D... 
00:22:29.509 00000100 81 96 e9 83 2e 3c 8f 6a 50 23 d5 a3 bc 5b a9 50 .....<.jP#...[.P 00:22:29.509 00000110 8b ac 07 ca a5 0a e2 d1 4f e8 8c ec b8 ea a4 06 ........O....... 00:22:29.509 00000120 c5 b5 5b b3 da 77 ed 72 ba 25 b3 b8 d2 5e 77 6a ..[..w.r.%...^wj 00:22:29.509 00000130 a7 9d 25 4f 71 7c ed a4 65 d3 98 04 cd 75 7f a6 ..%Oq|..e....u.. 00:22:29.509 00000140 11 3c af a9 c5 d0 21 cb 94 be 9d cd ac e8 d2 17 .<....!......... 00:22:29.509 00000150 fd 40 20 4e 11 a4 bf a5 a5 a1 9d 30 99 ef 86 e0 .@ N.......0.... 00:22:29.509 00000160 47 29 9d 2a 89 a7 75 e8 45 48 17 0b 21 85 8f bf G).*..u.EH..!... 00:22:29.509 00000170 57 65 3f 4e 98 19 46 fc 07 1e 27 ba f2 af 00 3e We?N..F...'....> 00:22:29.509 00000180 a9 12 22 0f 0d 88 ea 77 ae dd c8 9d 87 25 7e 14 .."....w.....%~. 00:22:29.509 00000190 85 0e 65 54 6e 7b 4c 66 4f d2 0b 0c 6d 55 c5 dc ..eTn{LfO...mU.. 00:22:29.509 000001a0 e8 bf a0 75 77 1f 7e 89 e6 b0 38 d8 23 73 16 e1 ...uw.~...8.#s.. 00:22:29.509 000001b0 f6 f5 45 a3 3d b5 f6 38 c4 fa c9 e9 5a e5 d6 42 ..E.=..8....Z..B 00:22:29.509 000001c0 9a 79 df 9e 83 b0 3d ee 1f b4 ba 42 b4 19 80 8b .y....=....B.... 00:22:29.509 000001d0 de 74 eb 02 a8 88 64 04 fa 60 f9 dc a5 63 bd 7e .t....d..`...c.~ 00:22:29.509 000001e0 51 b7 98 45 be 8e d7 c4 d6 64 47 9c c0 1e 3e 32 Q..E.....dG...>2 00:22:29.509 000001f0 fe 82 12 11 9f 5b b1 cd 5b eb 34 f5 e3 a9 3b 2b .....[..[.4...;+ 00:22:29.509 00000200 42 b3 85 a5 a5 78 1f 92 4f 66 7a 89 08 14 de 22 B....x..Ofz...." 00:22:29.509 00000210 96 9c ef f6 cb 29 e4 d0 e5 2c 87 27 b0 32 75 dc .....)...,.'.2u. 00:22:29.509 00000220 77 93 e6 3d 3c ca b1 8b 27 27 50 18 2d 2b 3d f9 w..=<...''P.-+=. 00:22:29.509 00000230 ad 9b f8 3e ae 2b da 36 fa 93 b5 51 e4 56 9d 70 ...>.+.6...Q.V.p 00:22:29.509 00000240 ab a6 05 1d a3 23 d4 5b e2 ec d3 5e e4 e0 28 9c .....#.[...^..(. 00:22:29.509 00000250 bb 8e 87 8e 3d 2e f3 39 e0 66 b5 28 e8 ee 6b 5c ....=..9.f.(..k\ 00:22:29.509 00000260 5c 95 19 0c 29 f2 52 8c bd 5a c2 82 85 27 69 92 \...).R..Z...'i. 00:22:29.509 00000270 9a f3 d7 21 7a e1 48 08 8f f8 c7 48 e1 29 f5 a7 ...!z.H....H.).. 00:22:29.509 00000280 15 46 2c 57 96 b4 9e a4 5f 58 28 cb 77 c0 75 02 .F,W...._X(.w.u. 00:22:29.509 00000290 78 33 ff 8a 79 06 4b c3 15 ab af 3f 73 44 d6 4a x3..y.K....?sD.J 00:22:29.509 000002a0 b0 f1 cc 41 33 b1 60 80 3c bd f4 5b 05 52 06 f7 ...A3.`.<..[.R.. 00:22:29.509 000002b0 6a 46 c9 76 e9 f7 47 6b 73 32 c2 a2 65 8b f5 cd jF.v..Gks2..e... 00:22:29.509 000002c0 41 d4 4c 27 3c 9f 45 49 de df 53 4f 0d a0 6c a6 A.L'<.EI..SO..l. 00:22:29.509 000002d0 95 86 28 42 d9 cf 06 f7 86 66 da 25 61 88 f5 55 ..(B.....f.%a..U 00:22:29.509 000002e0 7a 25 51 02 00 38 db 6c 3f 07 08 d1 c5 ea 0b 35 z%Q..8.l?......5 00:22:29.509 000002f0 1e 40 a7 4d 04 be 2e a5 bc e1 e7 bf 11 89 a4 21 .@.M...........! 00:22:29.509 host pubkey: 00:22:29.509 00000000 7f 95 5e 8b f6 2a f2 55 b9 c8 05 77 a3 1a 1d b8 ..^..*.U...w.... 00:22:29.509 00000010 67 92 9c 46 d8 c2 5b 2f 9e 8e a8 06 ea b0 e2 e8 g..F..[/........ 00:22:29.509 00000020 e8 68 1f 8d e9 b2 9b 5a e4 81 92 cc 2b 4b d2 6e .h.....Z....+K.n 00:22:29.509 00000030 f4 01 19 1d 7f 12 ce bd ce ca 31 09 fc 6d ba 5b ..........1..m.[ 00:22:29.509 00000040 dd 8d b0 b8 14 00 0e 8d aa b1 eb 98 ef f6 b3 5e ...............^ 00:22:29.509 00000050 e5 99 98 17 d3 87 71 f0 6f 3d 5a a3 8a c9 d1 42 ......q.o=Z....B 00:22:29.509 00000060 2e 61 48 7d 04 69 41 37 03 d0 a0 76 21 6b 97 1c .aH}.iA7...v!k.. 
00:22:29.509 00000070 3c 79 a4 54 66 92 a6 6f a6 55 f7 86 2b ce 7b 2b .@% 00:22:29.509 000000a0 aa 79 a8 b1 9e f6 56 2b d5 e7 5e 5f 5b 63 aa 83 .y....V+..^_[c.. 00:22:29.509 000000b0 a8 b7 53 01 ff ab 30 33 18 65 4b db 00 1e f6 d1 ..S...03.eK..... 00:22:29.509 000000c0 bb 94 a0 97 7a 4a 27 a0 40 43 47 2b 71 4d da fb ....zJ'.@CG+qM.. 00:22:29.509 000000d0 7b ed aa db d7 e6 16 75 0a 9b 67 4a 41 78 1e a3 {......u..gJAx.. 00:22:29.509 000000e0 dd 32 a6 01 56 e6 41 5b d8 d0 e1 ff ff b3 14 d4 .2..V.A[........ 00:22:29.509 000000f0 6a 2b 29 bb 3d f8 a8 e0 62 ff 04 37 d1 38 ac 0b j+).=...b..7.8.. 00:22:29.509 00000100 b5 c2 86 03 fd 0f 3f 6c d4 3d d2 66 38 1f fc 41 ......?l.=.f8..A 00:22:29.509 00000110 fe c9 ab 7b 90 36 5f ef cf 95 6d 44 4d 8a 16 ac ...{.6_...mDM... 00:22:29.510 00000120 12 bb fc 27 12 ec be 2c 53 f8 6d 32 c7 d2 52 fb ...'...,S.m2..R. 00:22:29.510 00000130 b7 84 d9 43 5e 8f 19 d8 0e 41 9e 8b c4 6a 1b 25 ...C^....A...j.% 00:22:29.510 00000140 ef 2e 6f fe e5 56 b7 98 33 9c e8 47 0d d1 2f 3b ..o..V..3..G../; 00:22:29.510 00000150 14 b8 65 36 28 c3 46 9f dc 31 4c 27 9a 27 b7 6b ..e6(.F..1L'.'.k 00:22:29.510 00000160 3c 14 6a 07 ed db d0 cb 12 a4 aa a9 c8 7e f5 03 <.j..........~.. 00:22:29.510 00000170 3d 83 fd 66 f6 c2 d4 58 48 55 c9 40 14 1a 50 cb =..f...XHU.@..P. 00:22:29.510 00000180 d9 5a a6 9b 02 71 7c 00 6e bd 9b db 11 f0 8c df .Z...q|.n....... 00:22:29.510 00000190 e8 e4 ac 9c 0b fc e1 94 e1 a3 6d 45 3e be 3b d8 ..........mE>.;. 00:22:29.510 000001a0 87 cc 5c 01 ac 40 b7 83 d1 e0 90 ff 4f 82 46 94 ..\..@......O.F. 00:22:29.510 000001b0 9d 53 61 95 b3 bb 8a a5 87 51 17 63 8d 1e 26 9a .Sa......Q.c..&. 00:22:29.510 000001c0 3b 5a 13 eb e0 76 da 5e f2 9a 2d df 9a 9d 8e 8e ;Z...v.^..-..... 00:22:29.510 000001d0 76 8d 2b 21 8e bf a0 20 a8 c0 83 15 6f 82 3b 96 v.+!... ....o.;. 00:22:29.510 000001e0 9b 94 8f f6 16 90 b3 f5 f3 3b 68 a8 07 89 69 fa .........;h...i. 00:22:29.510 000001f0 d2 d8 35 42 63 60 5f 5d bb c3 21 32 f2 bc 3b e4 ..5Bc`_]..!2..;. 00:22:29.510 00000200 15 56 74 fd cd f8 c5 06 eb f2 3a f7 2c 8e dc a8 .Vt.......:.,... 00:22:29.510 00000210 26 7c 1f a0 15 26 79 53 01 ad b0 89 51 b2 f1 0a &|...&yS....Q... 00:22:29.510 00000220 1e 40 7b 92 03 eb d3 07 11 9f 3f 83 cb 6b 56 ad .@{.......?..kV. 00:22:29.510 00000230 0e c3 4c 4a 17 6c 27 cf 51 2e bb 76 af 71 ce de ..LJ.l'.Q..v.q.. 00:22:29.510 00000240 2d 59 1e e2 04 f1 50 64 d7 e3 e0 4f 46 0a 37 12 -Y....Pd...OF.7. 00:22:29.510 00000250 d8 f2 c0 40 de d2 1d d4 93 5f 6e 92 f5 0a 4d 0b ...@....._n...M. 00:22:29.510 00000260 ee ba 78 ac 35 3a 60 64 8d ec 64 90 66 13 3b 56 ..x.5:`d..d.f.;V 00:22:29.510 00000270 2e 10 69 db 3f aa 83 d8 50 da 4b 0a b6 14 4c 3d ..i.?...P.K...L= 00:22:29.510 00000280 66 36 cb bf 6d 98 9e c1 aa e8 a7 15 e8 f3 42 9a f6..m.........B. 00:22:29.510 00000290 c4 52 4b bf 10 2d 85 33 88 44 78 1e 8e 96 0d ca .RK..-.3.Dx..... 00:22:29.510 000002a0 05 7e 7c c1 b9 85 47 77 84 c4 4a 29 92 6f f0 2e .~|...Gw..J).o.. 00:22:29.510 000002b0 fb 1d 42 4b 01 2b c6 cf 71 fe eb 94 f0 5b 3d ad ..BK.+..q....[=. 00:22:29.510 000002c0 4c 61 f3 1a 6c 2a 6d ec 1f f7 81 47 ca 05 18 e7 La..l*m....G.... 00:22:29.510 000002d0 a1 27 2d 1e 9e 61 34 33 8b 4b 9f c3 73 d8 61 21 .'-..a43.K..s.a! 00:22:29.510 000002e0 da 69 ba 1e 2a 85 38 2d 39 ee 4f 9f 57 a1 60 d6 .i..*.8-9.O.W.`. 00:22:29.510 000002f0 15 9b 1d c4 f9 b3 56 3b fc 08 da fb ab 7d 0a 63 ......V;.....}.c 00:22:29.510 dh secret: 00:22:29.510 00000000 9a 55 01 f2 31 4e ef c2 d1 a0 6f 8d 90 cb 94 f6 .U..1N....o..... 
00:22:29.510 00000010 6e 5b b5 94 ae 05 56 d4 c1 07 5b 8e fc 6e 98 48 n[....V...[..n.H 00:22:29.510 00000020 f4 10 fd 84 d7 2a 0b 53 1a cd ae f1 bd 28 93 c3 .....*.S.....(.. 00:22:29.510 00000030 ac ab f3 3c 72 92 5b 3c 76 4c 4a b0 90 79 a6 bd ....&X.. 00:22:29.510 000000e0 ad ae a3 cf 01 b0 a1 7e 61 e7 a6 59 35 85 6d ee .......~a..Y5.m. 00:22:29.510 000000f0 3b 52 98 84 04 c5 6d 14 e0 11 4e e5 a5 57 9a 81 ;R....m...N..W.. 00:22:29.510 00000100 db 6d fc 20 bb 7c 01 40 c1 15 6a 64 32 5d 2b 97 .m. .|.@..jd2]+. 00:22:29.510 00000110 36 41 f1 d8 18 e1 88 6c 12 42 8f 5f 3d 0f a6 62 6A.....l.B._=..b 00:22:29.510 00000120 2a 46 b8 df 47 62 40 4e 08 34 45 00 a0 65 ca 5b *F..Gb@N.4E..e.[ 00:22:29.510 00000130 4b 4b c6 53 f4 a8 dd f8 10 40 f4 dc b0 49 0f 8c KK.S.....@...I.. 00:22:29.510 00000140 d0 e9 fd 6f 0f 3f ae c5 f7 cd 6d b8 26 13 fd 19 ...o.?....m.&... 00:22:29.510 00000150 cb 23 0f 66 ea 47 34 4c a5 03 5d f9 cb 13 c9 c2 .#.f.G4L..]..... 00:22:29.510 00000160 73 89 ae 36 18 0e d5 d1 ff 5d 8d 8d da bb de aa s..6.....]...... 00:22:29.510 00000170 44 f5 33 c0 7e ff 42 79 bd 48 14 8d 3c d4 16 62 D.3.~.By.H..<..b 00:22:29.510 00000180 0a 85 52 bb 4d f8 f1 d8 dc e6 29 8a 37 d5 9d bd ..R.M.....).7... 00:22:29.510 00000190 b6 38 25 aa d9 4c 67 fc a9 7f a7 2a 16 79 cb 46 .8%..Lg....*.y.F 00:22:29.510 000001a0 65 4c 4a 83 fb 33 17 16 1b 12 13 ce 48 17 d3 ca eLJ..3......H... 00:22:29.510 000001b0 ee 54 8f 52 99 b9 e6 b0 79 df 90 03 53 ed 2a 79 .T.R....y...S.*y 00:22:29.510 000001c0 dc 7a e4 cb 2d 4a 25 5e 5a 7b 84 fc 7b b5 79 50 .z..-J%^Z{..{.yP 00:22:29.510 000001d0 58 8f 0e 52 a9 4d cd a7 12 ab 08 cf e3 c7 d8 fa X..R.M.......... 00:22:29.510 000001e0 83 ae f1 8b a5 75 c3 66 8d 5c 88 26 89 0a ff d3 .....u.f.\.&.... 00:22:29.510 000001f0 b4 00 13 cd fd 4c 4d d8 e1 4c d8 d4 96 e1 e7 fa .....LM..L...... 00:22:29.510 00000200 91 6a 1f 4b 03 d4 05 8e 29 4d 3d b2 af 8c 52 bb .j.K....)M=...R. 00:22:29.510 00000210 85 c1 d0 b1 23 36 08 d0 0f 54 ae 13 af ca aa e4 ....#6...T...... 00:22:29.510 00000220 da ef b7 5b 8c a7 26 fa 64 cc 70 c5 9a 65 30 db ...[..&.d.p..e0. 00:22:29.510 00000230 7e a6 76 4c fd 0e a7 06 7a 9e 73 4f 9f 4c 45 92 ~.vL....z.sO.LE. 00:22:29.510 00000240 7d ca a8 b2 38 26 46 88 bd f7 a3 18 98 8a d4 45 }...8&F........E 00:22:29.510 00000250 31 a4 15 cc 0d c5 ca ae b5 b8 d6 da 76 12 12 d3 1...........v... 00:22:29.510 00000260 26 f9 85 f1 9a 9a 13 71 25 2f bc bb aa 5c 66 0b &......q%/...\f. 00:22:29.510 00000270 03 d8 87 ff 30 85 34 7b 42 aa b0 1a fe 2d dc 52 ....0.4{B....-.R 00:22:29.510 00000280 79 2d 82 91 88 df 59 01 91 ec 8d f0 c9 7e fc c8 y-....Y......~.. 00:22:29.510 00000290 0c 93 b1 ca f6 45 4a c6 12 27 c2 0e 29 1a 90 56 .....EJ..'..)..V 00:22:29.510 000002a0 85 7e d6 f9 f9 3a bd b4 a3 d6 84 3c 60 9d ec 12 .~...:.....<`... 00:22:29.510 000002b0 cb 59 14 58 a7 06 7d c3 05 ba 5c 07 23 a8 b3 75 .Y.X..}...\.#..u 00:22:29.510 000002c0 17 75 89 65 2f 0d be 7b 58 41 51 93 33 f4 29 f9 .u.e/..{XAQ.3.). 00:22:29.510 000002d0 47 1f 32 49 89 cf 6e 3e 2b f2 ed 3f 7f 90 69 db G.2I..n>+..?..i. 00:22:29.510 000002e0 ac e1 04 f7 6a 33 c8 17 24 79 f0 b6 f1 70 f9 2d ....j3..$y...p.- 00:22:29.510 000002f0 35 64 d1 f3 d0 c9 40 24 06 d8 f1 b1 f5 d2 1d 87 5d....@$........ 
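The `nvme_auth_send_reply` records pair `hash=2` with the `digest: 2 (sha384)` negotiate lines, and their `len=48` is exactly the SHA-384 output size, since the reply body is an HMAC in the negotiated hash. The snippet below only checks that size relationship with placeholder inputs; the real DH-HMAC-CHAP response also folds in the sequence number, transaction id and NQNs shown in these records, and that derivation is left to nvme_auth.c.

```python
# Minimal check that len=48 in the nvme_auth_send_reply records matches the
# SHA-384 output size negotiated as "digest: 2 (sha384)". This illustrates the
# size relationship only; the actual DH-HMAC-CHAP response derivation is not
# reproduced here.
import hashlib
import hmac

assert hashlib.sha384().digest_size == 48

placeholder_key = b"\x00" * 48        # hypothetical key material
placeholder_challenge = b"\x00" * 48  # hypothetical challenge bytes
reply = hmac.new(placeholder_key, placeholder_challenge, hashlib.sha384).digest()
assert len(reply) == 48               # matches len=48 in the log records
```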
00:22:29.510 [2024-09-27 13:27:16.618850] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key2, hash=2, dhgroup=4, seq=3775755258, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.510 [2024-09-27 13:27:16.619181] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.510 [2024-09-27 13:27:16.670623] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.510 [2024-09-27 13:27:16.671130] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.510 [2024-09-27 13:27:16.671349] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.510 [2024-09-27 13:27:16.671564] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.510 [2024-09-27 13:27:16.801086] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.510 [2024-09-27 13:27:16.801374] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 2 (sha384) 00:22:29.510 [2024-09-27 13:27:16.801598] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 4 (ffdhe6144) 00:22:29.510 [2024-09-27 13:27:16.801805] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.510 [2024-09-27 13:27:16.802095] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.510 ctrlr pubkey: 00:22:29.510 00000000 9d 41 99 b8 61 64 75 52 6e 8d 04 8d 4b 12 08 24 .A..aduRn...K..$ 00:22:29.510 00000010 8d 62 de 80 02 47 e4 94 99 96 97 13 f2 f4 82 10 .b...G.......... 00:22:29.510 00000020 d6 7a 6b a5 22 74 56 9d 60 d1 e0 15 5d f1 d2 b7 .zk."tV.`...]... 00:22:29.510 00000030 e6 75 4a 30 fd 48 ff 84 24 7a ca 97 55 ef 0b 9d .uJ0.H..$z..U... 00:22:29.510 00000040 4f 16 a0 55 28 b9 fc 31 39 2f 39 e4 68 8c 99 86 O..U(..19/9.h... 00:22:29.510 00000050 9d de f2 a7 94 be f7 44 69 de f3 c0 f9 06 78 86 .......Di.....x. 00:22:29.510 00000060 a8 93 11 6f ac c7 f1 de 5c f4 cc e0 d1 84 3f 14 ...o....\.....?. 00:22:29.510 00000070 30 e8 5b 07 26 4e de 72 45 0f 92 6d f7 e2 9d 0f 0.[.&N.rE..m.... 00:22:29.510 00000080 5d 5b ff ec cf f4 d8 8d 54 f8 fc 3f 2c b5 45 e1 ][......T..?,.E. 00:22:29.510 00000090 d6 8a 95 17 39 7c 79 85 76 7e 31 5d 97 0f fe 03 ....9|y.v~1].... 00:22:29.510 000000a0 89 60 b6 e5 36 41 a6 94 e9 51 d4 d5 46 04 c7 3d .`..6A...Q..F..= 00:22:29.510 000000b0 c4 f0 b6 2f 13 4a bb de 29 ca ae 1d d1 fa de f6 .../.J..)....... 00:22:29.510 000000c0 41 e8 7d 39 b0 03 ce f6 82 83 87 88 05 e6 2f b7 A.}9........../. 00:22:29.510 000000d0 1a 69 7e 1d 1a e1 10 51 4e 3e 70 f9 4b b0 de 66 .i~....QN>p.K..f 00:22:29.510 000000e0 55 f6 81 ea b9 42 26 dc 42 70 c6 39 82 c2 e3 46 U....B&.Bp.9...F 00:22:29.510 000000f0 a2 9a 43 cb a1 17 95 97 16 1e 42 65 c5 4d 11 66 ..C.......Be.M.f 00:22:29.510 00000100 08 9a 20 3b 5f fb 40 4f 06 48 d9 ce 58 e4 8c bd .. ;_.@O.H..X... 
00:22:29.510 00000110 32 86 65 cf 76 bd be f7 b0 18 15 ea f7 a5 6e 21 2.e.v.........n! 00:22:29.510 00000120 f3 27 c7 d0 47 34 59 a9 9c 16 de 2d 4a 6e 9f e5 .'..G4Y....-Jn.. 00:22:29.510 00000130 40 f5 3a 3e 9d b1 10 54 88 9b 63 17 c1 67 a1 a1 @.:>...T..c..g.. 00:22:29.510 00000140 9e b0 2a 11 41 38 ff f7 0c fb a1 bd c1 7c 9f e0 ..*.A8.......|.. 00:22:29.510 00000150 67 14 93 29 3c 6d 27 55 8f 63 94 af 57 10 72 3f g..).kZ..8... 00:22:29.510 000001a0 c6 52 47 8b 9c e4 8c 13 1e 67 ee 48 43 97 92 5c .RG......g.HC..\ 00:22:29.510 000001b0 07 9d 0a b0 29 71 0b a0 9a ca c3 55 95 4f 8f 6c ....)q.....U.O.l 00:22:29.510 000001c0 c3 d8 e6 50 68 04 ea 21 04 0e b5 73 3d 01 e7 4a ...Ph..!...s=..J 00:22:29.510 000001d0 7f 26 1d 48 f0 66 cf c8 01 f4 b8 b9 f6 bc 1e a5 .&.H.f.......... 00:22:29.510 000001e0 c8 ab 75 38 a6 59 f3 3e d3 b1 06 18 c2 14 58 cb ..u8.Y.>......X. 00:22:29.510 000001f0 5f 18 73 d1 74 25 61 67 6b b6 42 70 2f 6a ab a8 _.s.t%agk.Bp/j.. 00:22:29.510 00000200 6c d7 71 14 52 ef f2 d6 1b b3 ed 84 2a 73 a7 28 l.q.R.......*s.( 00:22:29.510 00000210 8c 2b 9f 63 4e 05 31 a2 fb 47 c4 a9 15 18 c2 ca .+.cN.1..G...... 00:22:29.510 00000220 ff 44 bf 11 00 12 ad 65 d6 58 f0 ac 10 9b f6 5d .D.....e.X.....] 00:22:29.510 00000230 0a 3e f6 ce 1f 0f cf ac 04 b2 7b 6c 39 af 79 74 .>........{l9.yt 00:22:29.510 00000240 6a 04 27 c9 27 00 06 44 5e f3 6e 84 67 0a 88 15 j.'.'..D^.n.g... 00:22:29.510 00000250 0d 0a bd fe 3f a4 17 00 93 6e 25 04 a3 6d fe e6 ....?....n%..m.. 00:22:29.510 00000260 23 e8 44 b8 92 fd 9e 17 28 67 69 79 17 57 26 c8 #.D.....(giy.W&. 00:22:29.510 00000270 a5 6d b9 aa ed ff 21 01 52 c3 98 34 ca 12 69 17 .m....!.R..4..i. 00:22:29.510 00000280 a0 2b 50 08 2f 02 68 39 2f c7 42 4d 1c f3 1b a1 .+P./.h9/.BM.... 00:22:29.510 00000290 84 a0 f2 1b 50 b4 c1 f0 da c4 2a 77 05 0b fe 9c ....P.....*w.... 00:22:29.510 000002a0 ea d6 9f 2f c1 98 e0 cf dd 1d 08 f1 15 80 e7 b3 .../............ 00:22:29.511 000002b0 5b 22 78 1f 07 6c 0f 6a 3f 78 a7 2a 40 d9 7a 60 ["x..l.j?x.*@.z` 00:22:29.511 000002c0 40 e7 5e b9 3b 2d 63 0f ce 93 60 38 05 26 0d cb @.^.;-c...`8.&.. 00:22:29.511 000002d0 39 03 09 c4 a3 0e 29 9e fb ed 1c 2e 77 13 f1 23 9.....).....w..# 00:22:29.511 000002e0 ab ab b9 ff 56 1c b9 9c c2 98 42 47 a1 b4 ac 10 ....V.....BG.... 00:22:29.511 000002f0 60 e8 87 d4 52 ad 5d a7 47 18 63 11 cc 4f a9 0a `...R.].G.c..O.. 00:22:29.511 host pubkey: 00:22:29.511 00000000 7e ff 67 c8 27 b6 5e 69 f0 53 04 bc 72 1c 59 3b ~.g.'.^i.S..r.Y; 00:22:29.511 00000010 bc 72 c3 0a dd df 2b 1d cd a5 3e 11 ba 94 eb 93 .r....+...>..... 00:22:29.511 00000020 df 82 ca a0 28 d4 13 d7 31 b9 98 d4 84 83 cf 61 ....(...1......a 00:22:29.511 00000030 9e 04 b9 88 0d c1 32 ea 97 d8 b1 0a 34 20 45 4d ......2.....4 EM 00:22:29.511 00000040 0c 5b e3 28 30 0f ca 1b e9 87 54 f2 69 b9 7c a2 .[.(0.....T.i.|. 00:22:29.511 00000050 c0 6e 33 c6 2b 46 e2 1f 01 5d b4 be 31 a4 5b 69 .n3.+F...]..1.[i 00:22:29.511 00000060 86 6b f1 33 c3 08 d6 b1 d5 71 b9 b8 2a 3c b6 f0 .k.3.....q..*<.. 00:22:29.511 00000070 c1 64 75 5b 4b 15 c3 fd 75 79 40 be 92 42 d6 8b .du[K...uy@..B.. 00:22:29.511 00000080 43 bc 44 1b 1f 9b 8d ec 33 09 c1 38 56 4d f5 45 C.D.....3..8VM.E 00:22:29.511 00000090 86 ff ff 8a 79 71 60 2b c7 90 1e 09 42 fa 8a 73 ....yq`+....B..s 00:22:29.511 000000a0 a9 91 db d9 a5 00 89 cb 43 88 98 a9 08 b7 75 f4 ........C.....u. 00:22:29.511 000000b0 8d 05 8a 7a da 4c 58 18 46 63 6f c5 ed 00 85 04 ...z.LX.Fco..... 00:22:29.511 000000c0 8d 14 4e b6 38 a6 8d 0d c5 1d 29 90 b7 cc fd 88 ..N.8.....)..... 
00:22:29.511 000000d0 64 00 39 a2 de 38 81 cd 4f a8 52 70 47 32 8a 33 d.9..8..O.RpG2.3 00:22:29.511 000000e0 ad 93 0a c7 e0 49 dd 28 e5 d0 1d d0 ee 16 bb e9 .....I.(........ 00:22:29.511 000000f0 fb 34 80 53 f5 b5 6b e5 b1 ce 33 0f 83 a6 1c ae .4.S..k...3..... 00:22:29.511 00000100 f7 90 39 26 4b 69 eb 3d 7f a6 1a a8 f1 13 2e a7 ..9&Ki.=........ 00:22:29.511 00000110 31 3e ec 6e c7 50 3d 23 f3 a6 84 e6 3a 2e de 5b 1>.n.P=#....:..[ 00:22:29.511 00000120 cc 24 70 d4 9b aa 86 77 b5 4b 98 e9 a5 0e 39 1b .$p....w.K....9. 00:22:29.511 00000130 56 22 34 b8 ec 78 37 a7 98 c0 a9 a8 67 4f ac f1 V"4..x7.....gO.. 00:22:29.511 00000140 4e 70 37 cc f3 68 aa fc be f2 b6 6b 10 ae 9b 3a Np7..h.....k...: 00:22:29.511 00000150 a5 76 fe 42 10 75 06 5f e6 13 fe b7 8f 2c 92 0d .v.B.u._.....,.. 00:22:29.511 00000160 20 e1 10 40 8a 98 08 45 fd da 86 06 f5 45 3a 11 ..@...E.....E:. 00:22:29.511 00000170 47 d8 06 df ea 48 97 84 a3 dd 63 a4 e2 1d 2c 22 G....H....c...," 00:22:29.511 00000180 ff 77 b8 2a 5e 24 7c 7b dc f7 c4 45 0f 15 b7 11 .w.*^$|{...E.... 00:22:29.511 00000190 fd 64 9d 41 74 82 9e 86 1c 90 f1 e5 18 74 83 23 .d.At........t.# 00:22:29.511 000001a0 0b 4a d0 a6 1b 51 bf 67 1f da 0f df 1c 77 f7 68 .J...Q.g.....w.h 00:22:29.511 000001b0 c0 48 54 d0 7a ff d3 66 e1 1a cf 99 44 d8 e5 92 .HT.z..f....D... 00:22:29.511 000001c0 84 fb 79 5b 0b a0 22 ef 8a 7c 03 64 f0 eb 20 c3 ..y[.."..|.d.. . 00:22:29.511 000001d0 e6 94 7b 78 9b 70 f4 38 bb f9 e4 2e ce 43 d7 0e ..{x.p.8.....C.. 00:22:29.511 000001e0 6d 0b 38 9d 53 1a 70 d7 8f 4c 40 8c 32 b9 10 1f m.8.S.p..L@.2... 00:22:29.511 000001f0 e8 5c 14 2e a1 5b 61 bf e0 1c 2c aa 15 f1 0b 81 .\...[a...,..... 00:22:29.511 00000200 56 41 9a 4b 07 ad 20 fe c1 c9 d6 4f d9 b2 db 67 VA.K.. ....O...g 00:22:29.511 00000210 05 5d 0e ce 1f 3a 28 6a 8f 8f b0 dd b2 22 25 80 .]...:(j....."%. 00:22:29.511 00000220 64 bc 1d 73 a6 df a4 e1 46 18 f5 2b bf 1a e1 be d..s....F..+.... 00:22:29.511 00000230 b9 e9 8e b9 24 64 6d 5d 69 95 28 4a 57 a8 3f 98 ....$dm]i.(JW.?. 00:22:29.511 00000240 dd d8 2e 14 14 03 75 f9 20 66 92 f1 ef 9a da 6c ......u. f.....l 00:22:29.511 00000250 68 1c 86 30 39 87 ad 1e 7a d8 32 81 d8 c9 50 cd h..09...z.2...P. 00:22:29.511 00000260 75 b7 61 af e9 d2 aa 74 8d c4 43 80 3c 33 eb 5e u.a....t..C.<3.^ 00:22:29.511 00000270 c0 ce 4e b7 df 97 9f ff d6 57 6e 16 54 e5 2a 73 ..N......Wn.T.*s 00:22:29.511 00000280 b7 0d 05 4a 32 99 91 fb 2d 97 a9 47 b2 b2 68 cc ...J2...-..G..h. 00:22:29.511 00000290 2e 9c 7e 4d 51 fb 99 87 0e b6 da 24 c7 ec d8 0f ..~MQ......$.... 00:22:29.511 000002a0 59 d9 b6 f5 d2 c4 0f 3b f8 59 bc 49 fb 10 2a 96 Y......;.Y.I..*. 00:22:29.511 000002b0 a1 ec 20 41 e5 81 7e fb 34 68 c6 fc 0a c9 87 65 .. A..~.4h.....e 00:22:29.511 000002c0 b7 ba 07 50 7b b5 f0 5c cc 78 e1 a1 79 9a dd 02 ...P{..\.x..y... 00:22:29.511 000002d0 aa bc 76 3f b8 47 c6 c8 02 bf 8d 6a c6 68 aa 8b ..v?.G.....j.h.. 00:22:29.511 000002e0 23 b1 00 f4 c7 75 34 75 f9 c8 a2 b6 44 a5 0e 4b #....u4u....D..K 00:22:29.511 000002f0 29 3c 2e 3c 1d 90 c0 49 63 88 ba e8 d7 ce 52 fc )<.<...Ic.....R. 00:22:29.511 dh secret: 00:22:29.511 00000000 8b 58 e4 1f bb 0b 41 9c 7c b9 15 d5 d4 8f 36 20 .X....A.|.....6 00:22:29.511 00000010 e6 c3 14 ec 54 2e 92 ec 58 de 1c 81 e7 43 e2 13 ....T...X....C.. 00:22:29.511 00000020 14 38 46 9c 7e b5 80 a6 78 d5 2f 7c 63 94 80 ce .8F.~...x./|c... 
00:22:29.511 00000030 f6 96 ad 34 2f ed 41 3b 66 ce be ca 8e e1 3b 42 ...4/.A;f.....;B 00:22:29.511 00000040 8b 56 30 63 17 2c c9 ca 21 ef af 56 58 7f 53 52 .V0c.,..!..VX.SR 00:22:29.511 00000050 a0 31 cd ce 53 0a 2a d2 7b 1f be 5c fe ef b6 9c .1..S.*.{..\.... 00:22:29.511 00000060 7f 1b 39 54 32 4a a3 bb 75 39 68 ef 88 2c ce be ..9T2J..u9h..,.. 00:22:29.511 00000070 db 15 51 c6 43 76 cc 41 37 d4 61 0f 3b d1 1a 2e ..Q.Cv.A7.a.;... 00:22:29.511 00000080 ec 40 db e6 74 f5 23 92 2c f1 fb ff 7d 7e 11 5a .@..t.#.,...}~.Z 00:22:29.511 00000090 c5 6c 87 e6 e5 1d 1b f0 15 03 10 0b 91 e0 ab df .l.............. 00:22:29.511 000000a0 d3 7f 2b f4 31 15 95 7f cd 81 ee 38 55 ec 0b a0 ..+.1......8U... 00:22:29.511 000000b0 55 45 af b3 7c 76 2d a7 7d 81 f0 47 ef 51 de 49 UE..|v-.}..G.Q.I 00:22:29.511 000000c0 24 f8 87 44 01 32 7a f6 15 10 78 30 3e 39 d5 c6 $..D.2z...x0>9.. 00:22:29.511 000000d0 8d cc c2 b7 e0 3e 32 c6 6e ff ef e5 5e a4 bd db .....>2.n...^... 00:22:29.511 000000e0 fd bb d4 86 06 1a 02 55 e4 37 50 56 c9 6a e3 d7 .......U.7PV.j.. 00:22:29.511 000000f0 23 ee ca 1f 3a 48 90 2a f6 76 55 9d 55 bd b9 96 #...:H.*.vU.U... 00:22:29.511 00000100 44 b5 1c 8f b4 0f 3b 32 65 74 3e 1a d5 45 70 d0 D.....;2et>..Ep. 00:22:29.511 00000110 f3 72 d7 1d dc 8a 41 5d d6 e9 d3 ea df b2 d9 29 .r....A].......) 00:22:29.511 00000120 8f a5 c3 4d 92 e6 54 e2 da ac 2c ed cd 81 8e e5 ...M..T...,..... 00:22:29.511 00000130 47 52 3b 3d e6 49 58 11 89 5f c6 65 c0 0f f9 4f GR;=.IX.._.e...O 00:22:29.511 00000140 9f 43 82 ca d4 c8 b7 e0 90 7d 08 8b 01 88 63 ae .C.......}....c. 00:22:29.511 00000150 cd 40 86 d3 f0 66 3d 0d 05 ee b3 d6 5e 3c a4 2f .@...f=.....^<./ 00:22:29.511 00000160 3f e3 e6 58 3c 29 89 21 ff 26 85 74 e4 08 06 4d ?..X<).!.&.t...M 00:22:29.511 00000170 5c 0f a1 9e 68 4e ff 3f 19 cc 76 cc af cb f0 a7 \...hN.?..v..... 00:22:29.511 00000180 c5 6d 99 99 49 13 02 45 70 8a d6 59 a4 b9 7e 30 .m..I..Ep..Y..~0 00:22:29.511 00000190 b2 b4 7e 0d 08 25 0d 48 a1 eb 0b a0 8b 51 37 48 ..~..%.H.....Q7H 00:22:29.511 000001a0 a4 c2 63 70 47 87 3a ea 3c 6d c5 bf 76 19 cd c4 ..cpG.:. 00:22:29.511 000001e0 6a f3 ce fe a7 85 fa a2 4b 43 fd c6 b9 90 e2 0c j.......KC...... 00:22:29.511 000001f0 8b f3 b6 e8 e5 26 79 83 5c 96 ab d1 ae bd 95 84 .....&y.\....... 00:22:29.511 00000200 a0 fb 2e f6 fa d0 15 ad 7f 05 6e fa 94 b1 a6 b6 ..........n..... 00:22:29.511 00000210 bf c2 fa 6d b9 d7 16 aa 37 ff ba 09 23 3d f4 73 ...m....7...#=.s 00:22:29.511 00000220 c3 19 22 44 6f 55 76 1a 85 7c 5a fd b3 37 51 f4 .."DoUv..|Z..7Q. 00:22:29.511 00000230 27 81 7f de 27 12 46 c3 17 3d ba 17 e9 f5 8f e0 '...'.F..=...... 00:22:29.511 00000240 b2 ec 63 05 7b 9f 79 ce f4 61 0b 05 67 9a 70 52 ..c.{.y..a..g.pR 00:22:29.511 00000250 02 77 ec 8f b0 ac 47 0f eb bf 2a 87 de b7 7f 41 .w....G...*....A 00:22:29.511 00000260 3e bc fd e4 f0 6c c6 7a c8 5e eb 7d 58 79 9f 61 >....l.z.^.}Xy.a 00:22:29.511 00000270 2c fb 70 d9 3d 9f 60 ac 9a d6 cd f8 bd 20 70 59 ,.p.=.`...... pY 00:22:29.511 00000280 7d 38 e2 b8 ec b0 af 24 98 6b 81 01 a7 4b 52 dc }8.....$.k...KR. 00:22:29.511 00000290 19 72 30 a3 29 be 88 59 99 7e 4c 9c 30 12 6f fd .r0.)..Y.~L.0.o. 00:22:29.511 000002a0 2e 12 b9 3f ba e8 10 67 fb e0 76 f1 48 57 14 0e ...?...g..v.HW.. 00:22:29.511 000002b0 b3 6c f5 4c 3b dc 76 e8 be 13 49 4e 04 41 44 47 .l.L;.v...IN.ADG 00:22:29.511 000002c0 05 87 f8 51 b1 cd de be 55 3a 10 8c 70 84 81 21 ...Q....U:..p..! 00:22:29.511 000002d0 e5 2c 77 c0 d1 d5 97 d8 6a 00 f5 89 bd 33 54 dd .,w.....j....3T. 
00:22:29.511 000002e0 8d fd 42 3b 19 1a 78 06 4a 0c 98 0b 2c 81 46 2f ..B;..x.J...,.F/ 00:22:29.511 000002f0 c4 e9 10 5e f6 ab dc 1f 46 1f fe ee 8c 3e f2 7c ...^....F....>.| 00:22:29.511 [2024-09-27 13:27:16.877458] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key3, hash=2, dhgroup=4, seq=3775755259, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.511 [2024-09-27 13:27:16.877878] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.511 [2024-09-27 13:27:16.927575] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.511 [2024-09-27 13:27:16.928098] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.511 [2024-09-27 13:27:16.928373] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.511 [2024-09-27 13:27:16.928622] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.511 [2024-09-27 13:27:16.980675] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.511 [2024-09-27 13:27:16.980877] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.511 [2024-09-27 13:27:16.981181] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 4 (ffdhe6144) 00:22:29.511 [2024-09-27 13:27:16.981368] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.511 [2024-09-27 13:27:16.981788] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.511 ctrlr pubkey: 00:22:29.511 00000000 9d 41 99 b8 61 64 75 52 6e 8d 04 8d 4b 12 08 24 .A..aduRn...K..$ 00:22:29.511 00000010 8d 62 de 80 02 47 e4 94 99 96 97 13 f2 f4 82 10 .b...G.......... 00:22:29.511 00000020 d6 7a 6b a5 22 74 56 9d 60 d1 e0 15 5d f1 d2 b7 .zk."tV.`...]... 00:22:29.511 00000030 e6 75 4a 30 fd 48 ff 84 24 7a ca 97 55 ef 0b 9d .uJ0.H..$z..U... 00:22:29.512 00000040 4f 16 a0 55 28 b9 fc 31 39 2f 39 e4 68 8c 99 86 O..U(..19/9.h... 00:22:29.512 00000050 9d de f2 a7 94 be f7 44 69 de f3 c0 f9 06 78 86 .......Di.....x. 00:22:29.512 00000060 a8 93 11 6f ac c7 f1 de 5c f4 cc e0 d1 84 3f 14 ...o....\.....?. 00:22:29.512 00000070 30 e8 5b 07 26 4e de 72 45 0f 92 6d f7 e2 9d 0f 0.[.&N.rE..m.... 00:22:29.512 00000080 5d 5b ff ec cf f4 d8 8d 54 f8 fc 3f 2c b5 45 e1 ][......T..?,.E. 00:22:29.512 00000090 d6 8a 95 17 39 7c 79 85 76 7e 31 5d 97 0f fe 03 ....9|y.v~1].... 00:22:29.512 000000a0 89 60 b6 e5 36 41 a6 94 e9 51 d4 d5 46 04 c7 3d .`..6A...Q..F..= 00:22:29.512 000000b0 c4 f0 b6 2f 13 4a bb de 29 ca ae 1d d1 fa de f6 .../.J..)....... 00:22:29.512 000000c0 41 e8 7d 39 b0 03 ce f6 82 83 87 88 05 e6 2f b7 A.}9........../. 
00:22:29.512 000000d0 1a 69 7e 1d 1a e1 10 51 4e 3e 70 f9 4b b0 de 66 .i~....QN>p.K..f 00:22:29.512 000000e0 55 f6 81 ea b9 42 26 dc 42 70 c6 39 82 c2 e3 46 U....B&.Bp.9...F 00:22:29.512 000000f0 a2 9a 43 cb a1 17 95 97 16 1e 42 65 c5 4d 11 66 ..C.......Be.M.f 00:22:29.512 00000100 08 9a 20 3b 5f fb 40 4f 06 48 d9 ce 58 e4 8c bd .. ;_.@O.H..X... 00:22:29.512 00000110 32 86 65 cf 76 bd be f7 b0 18 15 ea f7 a5 6e 21 2.e.v.........n! 00:22:29.512 00000120 f3 27 c7 d0 47 34 59 a9 9c 16 de 2d 4a 6e 9f e5 .'..G4Y....-Jn.. 00:22:29.512 00000130 40 f5 3a 3e 9d b1 10 54 88 9b 63 17 c1 67 a1 a1 @.:>...T..c..g.. 00:22:29.512 00000140 9e b0 2a 11 41 38 ff f7 0c fb a1 bd c1 7c 9f e0 ..*.A8.......|.. 00:22:29.512 00000150 67 14 93 29 3c 6d 27 55 8f 63 94 af 57 10 72 3f g..).kZ..8... 00:22:29.512 000001a0 c6 52 47 8b 9c e4 8c 13 1e 67 ee 48 43 97 92 5c .RG......g.HC..\ 00:22:29.512 000001b0 07 9d 0a b0 29 71 0b a0 9a ca c3 55 95 4f 8f 6c ....)q.....U.O.l 00:22:29.512 000001c0 c3 d8 e6 50 68 04 ea 21 04 0e b5 73 3d 01 e7 4a ...Ph..!...s=..J 00:22:29.512 000001d0 7f 26 1d 48 f0 66 cf c8 01 f4 b8 b9 f6 bc 1e a5 .&.H.f.......... 00:22:29.512 000001e0 c8 ab 75 38 a6 59 f3 3e d3 b1 06 18 c2 14 58 cb ..u8.Y.>......X. 00:22:29.512 000001f0 5f 18 73 d1 74 25 61 67 6b b6 42 70 2f 6a ab a8 _.s.t%agk.Bp/j.. 00:22:29.512 00000200 6c d7 71 14 52 ef f2 d6 1b b3 ed 84 2a 73 a7 28 l.q.R.......*s.( 00:22:29.512 00000210 8c 2b 9f 63 4e 05 31 a2 fb 47 c4 a9 15 18 c2 ca .+.cN.1..G...... 00:22:29.512 00000220 ff 44 bf 11 00 12 ad 65 d6 58 f0 ac 10 9b f6 5d .D.....e.X.....] 00:22:29.512 00000230 0a 3e f6 ce 1f 0f cf ac 04 b2 7b 6c 39 af 79 74 .>........{l9.yt 00:22:29.512 00000240 6a 04 27 c9 27 00 06 44 5e f3 6e 84 67 0a 88 15 j.'.'..D^.n.g... 00:22:29.512 00000250 0d 0a bd fe 3f a4 17 00 93 6e 25 04 a3 6d fe e6 ....?....n%..m.. 00:22:29.512 00000260 23 e8 44 b8 92 fd 9e 17 28 67 69 79 17 57 26 c8 #.D.....(giy.W&. 00:22:29.512 00000270 a5 6d b9 aa ed ff 21 01 52 c3 98 34 ca 12 69 17 .m....!.R..4..i. 00:22:29.512 00000280 a0 2b 50 08 2f 02 68 39 2f c7 42 4d 1c f3 1b a1 .+P./.h9/.BM.... 00:22:29.512 00000290 84 a0 f2 1b 50 b4 c1 f0 da c4 2a 77 05 0b fe 9c ....P.....*w.... 00:22:29.512 000002a0 ea d6 9f 2f c1 98 e0 cf dd 1d 08 f1 15 80 e7 b3 .../............ 00:22:29.512 000002b0 5b 22 78 1f 07 6c 0f 6a 3f 78 a7 2a 40 d9 7a 60 ["x..l.j?x.*@.z` 00:22:29.512 000002c0 40 e7 5e b9 3b 2d 63 0f ce 93 60 38 05 26 0d cb @.^.;-c...`8.&.. 00:22:29.512 000002d0 39 03 09 c4 a3 0e 29 9e fb ed 1c 2e 77 13 f1 23 9.....).....w..# 00:22:29.512 000002e0 ab ab b9 ff 56 1c b9 9c c2 98 42 47 a1 b4 ac 10 ....V.....BG.... 00:22:29.512 000002f0 60 e8 87 d4 52 ad 5d a7 47 18 63 11 cc 4f a9 0a `...R.].G.c..O.. 00:22:29.512 host pubkey: 00:22:29.512 00000000 c7 84 5d 55 dc dc 12 83 9d e9 b7 de 1a a4 0c 9c ..]U............ 00:22:29.512 00000010 ce e7 65 65 fe 1c 7d b1 7b 10 6e 6d 66 2f 91 81 ..ee..}.{.nmf/.. 00:22:29.512 00000020 22 e5 b7 e1 32 29 c4 2f f4 17 5b 32 4a a9 0b d4 "...2)./..[2J... 00:22:29.512 00000030 1c c7 ef 51 7e 80 27 23 35 be 71 e2 40 8e f5 86 ...Q~.'#5.q.@... 00:22:29.512 00000040 de 77 fa 36 d5 8e 89 62 80 01 45 b1 c3 74 03 3f .w.6...b..E..t.? 00:22:29.512 00000050 d5 df 39 bf 87 b5 61 08 2f 04 83 08 14 38 54 a8 ..9...a./....8T. 00:22:29.512 00000060 fd 7d 15 55 6c 9e 2a 92 b4 f8 67 dc ee 34 98 bb .}.Ul.*...g..4.. 00:22:29.512 00000070 dc 04 5f 86 20 ce ee 15 0c 01 31 6e d0 03 42 df .._. .....1n..B. 00:22:29.512 00000080 56 0b 72 0d 62 b6 0e f4 27 9f 83 47 56 10 ac ef V.r.b...'..GV... 
00:22:29.512 00000090 68 44 bb 65 96 ab 06 8e c0 3e 3d 03 c4 15 22 aa hD.e.....>=...". 00:22:29.512 000000a0 2e 76 15 15 ab cf 50 f0 32 f6 34 c4 df 3b 02 70 .v....P.2.4..;.p 00:22:29.512 000000b0 d7 d1 78 ec 44 b0 f4 0f 05 b8 48 43 f5 1e d4 cd ..x.D.....HC.... 00:22:29.512 000000c0 0b a0 81 63 dc 7f 80 20 0f f1 1f ef 54 04 3a d2 ...c... ....T.:. 00:22:29.512 000000d0 1e e9 57 e0 33 16 b0 e6 bb fa 85 fc 5f ba f2 e7 ..W.3......._... 00:22:29.512 000000e0 20 72 a4 a8 d1 aa ba 90 f3 89 ea 95 e5 a8 ad 64 r.............d 00:22:29.512 000000f0 3c ff ff 99 f3 ac 8c 7e cf b3 3f 07 52 f7 c8 84 <......~..?.R... 00:22:29.512 00000100 f8 99 2a 5e fd b5 22 81 9b b1 23 78 17 06 12 35 ..*^.."...#x...5 00:22:29.512 00000110 13 b9 dc 9a 54 a8 f6 7e 5f 12 bb af 0d 9d 51 1f ....T..~_.....Q. 00:22:29.512 00000120 3b 51 c9 3a 71 8c 31 42 66 a8 ae 7c 2f ca dc bc ;Q.:q.1Bf..|/... 00:22:29.512 00000130 43 4d 11 7e 22 78 83 4f 1b cd 69 c1 3d a5 93 16 CM.~"x.O..i.=... 00:22:29.512 00000140 db 0d 4d 39 30 60 57 f0 f6 c3 51 7e 3c 82 88 a8 ..M90`W...Q~<... 00:22:29.512 00000150 01 0f be e2 75 96 4f 78 eb 08 54 e2 94 33 84 10 ....u.Ox..T..3.. 00:22:29.512 00000160 8d b1 49 87 64 87 60 96 2e 70 72 c8 bd ad 11 11 ..I.d.`..pr..... 00:22:29.512 00000170 55 7b d4 3e 67 a7 d5 24 79 bc c4 f9 19 02 f3 bb U{.>g..$y....... 00:22:29.512 00000180 22 a9 ab f1 af 94 00 24 ec 93 10 c2 0c 58 5e ee "......$.....X^. 00:22:29.512 00000190 79 88 54 db 85 f5 b6 45 fa 43 b4 7f 11 c8 1c d8 y.T....E.C...... 00:22:29.512 000001a0 c2 8a 2a dc 3d b7 13 aa 24 57 22 f4 20 b7 95 d8 ..*.=...$W". ... 00:22:29.512 000001b0 9d c8 0b 57 8b 5e 3a 8d df ba aa 16 58 c8 ab fa ...W.^:.....X... 00:22:29.512 000001c0 08 29 12 78 90 69 59 0f 41 72 bc 24 d8 99 bf 42 .).x.iY.Ar.$...B 00:22:29.512 000001d0 e3 7b 9b c1 dc d9 d9 0f 6b d8 e5 0c 7d 41 f0 71 .{......k...}A.q 00:22:29.512 000001e0 a4 f3 62 77 83 f0 23 00 ea fa 82 33 d5 7c ac 64 ..bw..#....3.|.d 00:22:29.512 000001f0 82 b5 2b df b4 87 10 32 95 84 36 d5 b2 e1 9d d8 ..+....2..6..... 00:22:29.512 00000200 3b 46 d8 c9 80 70 c6 3a fd 32 30 c2 35 67 e2 72 ;F...p.:.20.5g.r 00:22:29.512 00000210 dc 55 ef a2 94 8d 26 ab 0e 8f ef 01 1c c6 44 05 .U....&.......D. 00:22:29.512 00000220 2a 6b 12 fa 47 3e 44 e6 69 b6 30 4c f9 df 00 00 *k..G>D.i.0L.... 00:22:29.512 00000230 a1 42 0e d3 74 52 ae f5 fa 80 a0 10 08 ce ef 0f .B..tR.......... 00:22:29.512 00000240 a4 81 4c 30 69 99 14 15 7e e7 42 ed 95 ef 45 34 ..L0i...~.B...E4 00:22:29.512 00000250 a4 a8 bf c5 23 60 02 ca 91 3e c6 52 ce 66 82 be ....#`...>.R.f.. 00:22:29.512 00000260 2f 18 3a 2b cb aa d5 62 dc 8f 20 ff f5 75 68 b6 /.:+...b.. ..uh. 00:22:29.512 00000270 0f 4a 5f 0c d7 67 a2 e6 54 81 27 56 cc 48 8d 27 .J_..g..T.'V.H.' 00:22:29.512 00000280 ba 42 d3 81 48 c6 9f a0 51 0c 9b 9f 45 3c fc a2 .B..H...Q...E<.. 00:22:29.512 00000290 82 18 7b 4c ba 3e d0 0d a4 f9 cd 88 0e 31 58 11 ..{L.>.......1X. 00:22:29.512 000002a0 cd 48 0e 6e fe 1d b4 8d 11 5a e4 80 b8 62 02 bd .H.n.....Z...b.. 00:22:29.512 000002b0 47 97 bb 03 18 32 21 bc 54 0e 71 d2 e8 51 08 cc G....2!.T.q..Q.. 00:22:29.512 000002c0 5d c5 20 a9 75 c5 26 02 e0 36 7b 2c 3f 0b 82 91 ]. .u.&..6{,?... 00:22:29.512 000002d0 80 38 0f fd d4 20 dd 6a 5a 80 23 19 e5 5e ce e4 .8... .jZ.#..^.. 00:22:29.512 000002e0 8d 70 03 d5 04 1a 70 b3 c5 7d 85 60 c4 a0 ef ae .p....p..}.`.... 00:22:29.512 000002f0 0e e1 95 4f 12 10 a9 1d e5 85 90 e3 b3 fb d5 83 ...O............ 00:22:29.512 dh secret: 00:22:29.512 00000000 65 c1 e0 28 3b 60 ee 3f ac bc a9 07 3f cd 42 e3 e..(;`.?....?.B. 
00:22:29.512 00000010 38 cb 2e ce 57 92 22 b0 ce cb b9 dd 52 4f e8 7a 8...W.".....RO.z 00:22:29.512 00000020 c1 9e a4 74 c2 47 2c 55 4e a8 99 83 d9 82 d0 45 ...t.G,UN......E 00:22:29.512 00000030 40 ff 1d 93 ba 12 44 3d af 77 03 3a e3 ad fe f9 @.....D=.w.:.... 00:22:29.512 00000040 e0 80 dd ec db d1 85 c9 a1 37 ec a6 2c 19 ad d3 .........7..,... 00:22:29.512 00000050 fb 55 88 9e d8 6f ef 09 95 6f 58 6b 43 42 f4 34 .U...o...oXkCB.4 00:22:29.512 00000060 17 94 5e 1d c8 e8 56 61 54 36 98 3b 61 27 97 6c ..^...VaT6.;a'.l 00:22:29.512 00000070 1f c3 77 03 61 98 36 b0 94 59 32 10 8b 12 35 ac ..w.a.6..Y2...5. 00:22:29.512 00000080 5b ce 21 a5 d8 2d 3c f2 86 c9 e2 38 62 f7 b9 ee [.!..-<....8b... 00:22:29.512 00000090 84 07 17 6e 79 06 24 50 1b a7 90 35 ee 0b 52 04 ...ny.$P...5..R. 00:22:29.512 000000a0 50 ce 44 94 ee f6 1d 48 40 af ab 17 aa ca 79 cb P.D....H@.....y. 00:22:29.512 000000b0 91 ab c4 b3 61 28 77 73 4e 76 e1 44 d1 c3 46 bb ....a(wsNv.D..F. 00:22:29.512 000000c0 48 db 0b bb e0 ad ff c1 0e 82 26 08 f6 a3 eb ef H.........&..... 00:22:29.512 000000d0 70 23 a9 3a b9 b6 5c 33 11 eb d7 38 53 9c 78 1b p#.:..\3...8S.x. 00:22:29.512 000000e0 70 c7 b3 78 02 de 03 31 2e 82 32 7c a5 31 79 1c p..x...1..2|.1y. 00:22:29.512 000000f0 1d 22 65 9e 6c 52 ae d6 37 a3 61 ff f2 05 c0 80 ."e.lR..7.a..... 00:22:29.512 00000100 82 97 58 0c 96 b3 31 80 1b fa 89 74 86 c4 22 77 ..X...1....t.."w 00:22:29.512 00000110 ac 57 1b fd 2b 8c de a3 d4 1c ff 96 d0 f1 a6 1e .W..+........... 00:22:29.512 00000120 a7 5e 9b 6b 22 7e 35 36 16 e0 6b 99 29 2a b6 bb .^.k"~56..k.)*.. 00:22:29.512 00000130 fd 2e b6 eb 23 9b 4f 2f d1 e1 95 02 60 66 ee 51 ....#.O/....`f.Q 00:22:29.512 00000140 0b b2 36 38 d4 ca b4 48 38 d4 ff f0 2a ad 19 ff ..68...H8...*... 00:22:29.512 00000150 dc af c8 01 49 21 f8 d9 50 54 21 83 e4 f7 9b fc ....I!..PT!..... 00:22:29.512 00000160 49 d5 ae a3 5a 5e 67 c1 51 73 f7 fb 70 97 9a 52 I...Z^g.Qs..p..R 00:22:29.512 00000170 60 a2 52 01 36 28 a8 34 1f b0 c6 1a 39 43 75 38 `.R.6(.4....9Cu8 00:22:29.512 00000180 90 67 49 7b ed 98 1b 85 62 55 18 7c 92 3b 0f cd .gI{....bU.|.;.. 00:22:29.512 00000190 93 da db 67 08 cc ea 27 7c 00 9a cd 06 5d bd 1c ...g...'|....].. 00:22:29.512 000001a0 d8 ec b2 03 a4 88 9e 75 3c 0e 42 75 0b a2 dd 34 .......u<.Bu...4 00:22:29.512 000001b0 88 e0 94 68 8c b3 b0 41 ff 75 f3 38 01 1a 9b 01 ...h...A.u.8.... 00:22:29.512 000001c0 33 9a 3c a7 d2 ca 50 a4 61 e1 2f c9 07 c4 4a fa 3.<...P.a./...J. 00:22:29.512 000001d0 72 b4 0b 8a b2 c4 94 ea c3 cf ef 5e 98 78 f9 da r..........^.x.. 00:22:29.512 000001e0 0a f8 23 02 36 3e 92 53 f7 d6 fb 91 12 b6 cc dd ..#.6>.S........ 00:22:29.512 000001f0 87 ba a3 35 d6 d7 af a6 e4 58 bf 38 d4 fc 5a ac ...5.....X.8..Z. 00:22:29.512 00000200 f6 40 99 77 57 ee ef 6e 31 c9 dd 28 78 88 a2 be .@.wW..n1..(x... 00:22:29.512 00000210 b2 a9 66 4c 51 36 db cb 03 95 4d 02 a4 e3 a4 ab ..fLQ6....M..... 00:22:29.512 00000220 8a 52 bc 70 87 cc a0 a9 4c b9 5d 9b 99 56 98 c1 .R.p....L.]..V.. 00:22:29.512 00000230 71 af 73 bc 60 c8 e3 ea 85 62 dd 57 9f 46 a4 34 q.s.`....b.W.F.4 00:22:29.512 00000240 f4 eb 4d c0 11 ad 41 09 a0 b6 e6 c7 06 af 6f 1b ..M...A.......o. 00:22:29.512 00000250 1e 0e 0d a3 af fb 04 1c 36 d6 81 38 46 41 c1 00 ........6..8FA.. 00:22:29.512 00000260 c7 45 15 2f e8 f3 a4 f4 c4 2a 13 3f 15 2b 03 8c .E./.....*.?.+.. 00:22:29.512 00000270 aa 85 34 86 59 cc 95 94 79 43 83 5f a5 37 26 e6 ..4.Y...yC._.7&. 00:22:29.512 00000280 8f 04 26 b0 e0 ef f2 e0 ab 16 d4 b6 51 41 77 89 ..&.........QAw. 
00:22:29.512 00000290 be 7c 8b 57 44 0c 17 d2 93 0c 2a 14 34 70 4f 47 .|.WD.....*.4pOG 00:22:29.512 000002a0 8f a0 44 37 63 99 9e d7 ab c1 a2 c3 fc bc 72 59 ..D7c.........rY 00:22:29.512 000002b0 6d 15 aa d4 1b fa 14 00 9a 45 dc a2 55 76 b7 81 m........E..Uv.. 00:22:29.512 000002c0 2d 57 d2 26 40 5a fe b1 c7 26 5b 7d 4a 0a 24 01 -W.&@Z...&[}J.$. 00:22:29.512 000002d0 3c 39 69 a4 4e b4 8f 73 36 1c da ab 72 35 5b 4b <9i.N..s6...r5[K 00:22:29.512 000002e0 2b d8 d4 b3 a0 2a 21 26 07 83 33 76 10 79 e4 f1 +....*!&..3v.y.. 00:22:29.512 000002f0 bf f3 e8 7f b4 b0 ba 78 fe f0 89 80 a9 f9 0e 6a .......x.......j 00:22:29.513 [2024-09-27 13:27:17.055851] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key3, hash=2, dhgroup=4, seq=3775755260, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.513 [2024-09-27 13:27:17.056184] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.513 [2024-09-27 13:27:17.106619] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.513 [2024-09-27 13:27:17.107075] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.513 [2024-09-27 13:27:17.107263] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.513 [2024-09-27 13:27:17.107658] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.513 [2024-09-27 13:27:17.253386] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.513 [2024-09-27 13:27:17.253789] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 2 (sha384) 00:22:29.513 [2024-09-27 13:27:17.254013] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 4 (ffdhe6144) 00:22:29.513 [2024-09-27 13:27:17.254261] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.513 [2024-09-27 13:27:17.254551] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.513 ctrlr pubkey: 00:22:29.513 00000000 8c aa 8f 5c 90 67 ae 68 99 ae 4d c3 7f c0 fa 36 ...\.g.h..M....6 00:22:29.513 00000010 42 c4 b7 97 b5 75 68 15 be 1f 94 d9 c4 a8 ea 5f B....uh........_ 00:22:29.513 00000020 33 2a 20 58 45 f0 8b d1 b4 0f e1 8a eb fb 49 4d 3* XE.........IM 00:22:29.513 00000030 01 98 80 1f d8 61 76 e0 28 29 61 90 9f c7 f5 a6 .....av.()a..... 00:22:29.513 00000040 5e fc ff 19 ad a3 5e f4 dc 48 b3 65 14 51 6d 6b ^.....^..H.e.Qmk 00:22:29.513 00000050 a7 fa a6 20 08 8f f7 5b a0 24 e2 c3 ba 66 51 c1 ... ...[.$...fQ. 00:22:29.513 00000060 12 28 f8 39 d4 e0 4d a2 48 2a 6b 62 2d f8 51 e1 .(.9..M.H*kb-.Q. 00:22:29.513 00000070 50 02 2f 86 29 cc 8c 93 70 4d 07 47 4d 4a fa 5c P./.)...pM.GMJ.\ 00:22:29.513 00000080 bd 14 24 25 1a 59 e3 e7 10 c9 5d 8c d6 bd 9f 9f ..$%.Y....]..... 00:22:29.513 00000090 6a b6 16 0f 3e 33 d2 4d 36 f9 44 6c fe 40 3f 96 j...>3.M6.Dl.@?. 
00:22:29.513 000000a0 a4 dd 61 ff 0c 6d fc ae 43 01 06 46 86 32 35 75 ..a..m..C..F.25u 00:22:29.513 000000b0 78 e0 cd b3 b9 0d e8 17 e9 82 bc f3 f5 c6 e0 17 x............... 00:22:29.513 000000c0 e7 b5 84 9a 75 3e 87 1d 24 62 c8 e1 b6 73 31 70 ....u>..$b...s1p 00:22:29.513 000000d0 27 90 fd 69 72 24 5d 5a fc ad a0 2e d6 c4 2c 7a '..ir$]Z......,z 00:22:29.513 000000e0 91 74 7a c1 98 1b 33 a5 aa 29 ec 96 c7 2d 04 08 .tz...3..)...-.. 00:22:29.513 000000f0 d8 ca e3 91 a5 d0 49 8c fd 44 96 4f 89 0b 8f ce ......I..D.O.... 00:22:29.513 00000100 11 b9 01 87 6a 9b b4 14 fb 29 89 5f cc e9 26 8c ....j....)._..&. 00:22:29.513 00000110 48 a6 9d 55 82 06 1a bd aa a0 8a dd 3d 1b 1c 74 H..U........=..t 00:22:29.513 00000120 ae 8c ad 54 ea 4a a7 68 68 90 15 f4 76 9b b1 1d ...T.J.hh...v... 00:22:29.513 00000130 42 87 5a e2 6c c4 8c 6c 2e 32 d5 95 dc 0b 9c b0 B.Z.l..l.2...... 00:22:29.513 00000140 88 3f 5c 37 6c 83 c0 a9 18 a0 11 54 6c a2 b0 46 .?\7l......Tl..F 00:22:29.513 00000150 c2 37 5b d2 4f ac c4 ab d0 c6 1f 3e 25 4d 8f 66 .7[.O......>%M.f 00:22:29.513 00000160 1f ee ac 24 fe d3 1c bf 89 d6 8d 18 db dd ee 8c ...$............ 00:22:29.513 00000170 ca 37 7b 2c c4 5e 8f aa 6e 76 98 5b ae b8 4d 98 .7{,.^..nv.[..M. 00:22:29.513 00000180 4c 23 2e cb 69 95 04 0e 9d 8a 8e 39 61 f6 d6 eb L#..i......9a... 00:22:29.513 00000190 4d c4 32 61 62 6d 59 97 d7 26 b6 08 05 35 b5 aa M.2abmY..&...5.. 00:22:29.513 000001a0 90 65 54 65 75 22 7d c5 5b 7b 16 f0 ab 30 9e 2e .eTeu"}.[{...0.. 00:22:29.513 000001b0 9e 17 1f 9f da 30 65 5b 61 15 a8 8b db c7 51 7b .....0e[a.....Q{ 00:22:29.513 000001c0 e3 ea 2b 2f fd 3e d1 c0 0b 9a 81 d6 2a 0a 79 ab ..+/.>......*.y. 00:22:29.513 000001d0 60 98 21 31 52 62 b0 10 f5 3c 58 36 a8 24 02 a1 `.!1Rb.......,... 00:22:29.513 000002f0 c6 eb 9b 55 44 4e d3 17 64 76 57 f3 f7 37 4e 23 ...UDN..dvW..7N# 00:22:29.513 host pubkey: 00:22:29.513 00000000 6b 65 a1 5d 0c 0e c9 b1 dd cd 94 77 b5 3c fb d1 ke.].......w.<.. 00:22:29.513 00000010 db 68 85 61 76 96 95 45 8e 22 e6 2c 1b 7d 58 4e .h.av..E.".,.}XN 00:22:29.513 00000020 38 19 de e9 c3 d5 e5 9b 79 87 ef 45 e9 7a 68 38 8.......y..E.zh8 00:22:29.513 00000030 9d 4e ef 9c bc ac 0a 95 26 9b d4 06 52 4d 8f ad .N......&...RM.. 00:22:29.513 00000040 7e 12 95 fb b0 f4 d8 e3 57 bd 06 06 30 76 64 54 ~.......W...0vdT 00:22:29.513 00000050 c4 4b da 6a c2 32 4a dc e1 2e 5d c3 ee 53 cd c4 .K.j.2J...]..S.. 00:22:29.513 00000060 f0 71 da ba 50 f2 6c 50 63 02 7c 04 99 26 b8 cb .q..P.lPc.|..&.. 00:22:29.513 00000070 a1 09 d9 ea b7 82 93 4f be 4a 35 44 ac 1b 0d 11 .......O.J5D.... 00:22:29.513 00000080 46 4b 9b 33 68 00 36 cb 16 ec 27 c6 55 bd 4a 22 FK.3h.6...'.U.J" 00:22:29.513 00000090 8b 0e e6 76 48 38 9c 41 e2 5d 04 1e 05 b8 ef 97 ...vH8.A.]...... 00:22:29.513 000000a0 6c e5 be 54 e9 8b ad 23 36 4c ca ee 05 de 74 c0 l..T...#6L....t. 00:22:29.513 000000b0 cb 10 1f 8c 05 6d 67 dc c6 56 77 08 c3 d8 93 9a .....mg..Vw..... 00:22:29.513 000000c0 2a 39 38 10 db 5e cc d0 75 52 83 f9 d3 3b e7 70 *98..^..uR...;.p 00:22:29.513 000000d0 75 b4 bb 01 7e 70 1d 7e 79 80 6d b5 6f 7d f4 33 u...~p.~y.m.o}.3 00:22:29.513 000000e0 8e 04 9d e3 bf 85 38 3b 4c 6b 46 e4 ef 36 09 f2 ......8;LkF..6.. 00:22:29.513 000000f0 90 20 c6 34 72 cb 32 56 1a e6 7d be e7 34 c2 e3 . .4r.2V..}..4.. 00:22:29.513 00000100 9e fd be ac 3d 31 e9 c3 5b d6 7c 81 c0 81 b1 54 ....=1..[.|....T 00:22:29.513 00000110 b2 82 03 12 49 7a 03 8c 1d 0d 2d 77 f1 66 00 0e ....Iz....-w.f.. 00:22:29.513 00000120 7e de e7 fb 1e 34 06 d4 bb 59 f3 3e 73 bc 00 f1 ~....4...Y.>s... 
00:22:29.513 00000130 fe 68 5e 0f 53 8c 9c 5d 63 57 10 63 40 77 28 7a .h^.S..]cW.c@w(z 00:22:29.513 00000140 d1 4d b5 7f 04 20 2a 38 83 70 5f 7e 8b de e9 46 .M... *8.p_~...F 00:22:29.513 00000150 48 3c 34 ab e8 e7 d3 68 e9 4b 29 aa fe 05 cd db H<4....h.K)..... 00:22:29.513 00000160 55 ba e3 bb e4 bb 42 b3 8b 2e 7a 24 10 cb 9e b6 U.....B...z$.... 00:22:29.513 00000170 99 35 fa e0 56 62 15 96 ea df 46 dd 57 a9 3b ba .5..Vb....F.W.;. 00:22:29.513 00000180 c5 5a 56 8e 1a 85 90 d8 a9 ed 3e 22 67 1d 17 7e .ZV.......>"g..~ 00:22:29.513 00000190 b2 bb 4f da 7e e5 af ab 4f 5a 07 a6 0a c6 11 03 ..O.~...OZ...... 00:22:29.513 000001a0 0e 71 51 e3 63 69 b2 27 5f 41 ee 62 03 4a 0d 0b .qQ.ci.'_A.b.J.. 00:22:29.513 000001b0 7c e3 52 db a9 04 e6 7e a5 56 63 2a ef 3f 14 a7 |.R....~.Vc*.?.. 00:22:29.513 000001c0 7c 2b bc f5 c2 39 a5 d8 e5 39 b2 a1 55 cb cf a4 |+...9...9..U... 00:22:29.513 000001d0 77 ab 0d 23 49 93 83 18 81 d9 a6 8d 22 84 10 1d w..#I......."... 00:22:29.513 000001e0 04 bc 01 99 6d 53 80 87 a7 2e 13 e9 5b 8e 48 fc ....mS......[.H. 00:22:29.513 000001f0 cf 18 4e 27 7d 43 5b 92 d2 e7 5d 3d 77 fb 69 52 ..N'}C[...]=w.iR 00:22:29.513 00000200 41 30 0f 11 e5 78 81 16 f5 e8 51 a3 80 ee 13 04 A0...x....Q..... 00:22:29.513 00000210 66 b2 23 f1 f4 e1 34 d6 12 36 e0 c2 e9 28 65 4a f.#...4..6...(eJ 00:22:29.513 00000220 10 a5 c0 33 12 49 e8 b0 be b7 26 63 2a 1a 9d 75 ...3.I....&c*..u 00:22:29.513 00000230 0f aa 67 78 29 77 3e 59 ba 3b 37 62 37 60 9e 28 ..gx)w>Y.;7b7`.( 00:22:29.513 00000240 e2 c3 15 da 4a e6 d6 dd 0f 88 2a 7d 4b cd c3 cb ....J.....*}K... 00:22:29.513 00000250 09 d0 99 4e 4f ae fe 96 e2 5f a5 12 52 2f 9f ff ...NO...._..R/.. 00:22:29.513 00000260 8a 14 d5 ad 97 f2 1b e0 14 ec 5c 0c 09 a8 a2 3b ..........\....; 00:22:29.513 00000270 f3 6f f9 37 ea 55 47 28 d3 c5 9f de 99 cc dc 28 .o.7.UG(.......( 00:22:29.513 00000280 61 3c 84 4f d9 bd 24 ee 4f 48 52 62 af f8 b9 d5 a<.O..$.OHRb.... 00:22:29.513 00000290 35 99 67 36 97 21 a6 50 54 b3 3e a2 b9 1b 96 55 5.g6.!.PT.>....U 00:22:29.513 000002a0 f9 90 b4 4d 05 6b eb b8 e2 8e e6 bc 1c aa 90 2a ...M.k.........* 00:22:29.513 000002b0 19 8f 2d 60 df 0d 5d 0c 06 2d 3e 26 e2 bc 6a 35 ..-`..]..->&..j5 00:22:29.513 000002c0 97 ac ba e9 a8 08 96 71 b8 36 e5 5a 0e df 68 e9 .......q.6.Z..h. 00:22:29.513 000002d0 4b 20 59 f1 1a 8f b7 a3 da 49 30 5b 10 01 08 bc K Y......I0[.... 00:22:29.513 000002e0 75 1c 4d 71 e8 77 12 68 2b e0 65 92 99 04 ea 7b u.Mq.w.h+.e....{ 00:22:29.513 000002f0 b8 0e d7 d9 e1 2b 85 b7 d2 89 97 85 b3 f9 d7 86 .....+.......... 00:22:29.513 dh secret: 00:22:29.513 00000000 8b d5 28 aa 22 43 5d de 58 07 c1 8a 6b 9f e2 1d ..(."C].X...k... 00:22:29.513 00000010 0f eb 30 f2 ed 4a 53 60 6c 6e 4a 1f 8d 32 0e 9c ..0..JS`lnJ..2.. 00:22:29.513 00000020 c7 4e 6e 92 ec 14 05 c7 60 70 cf 88 fc 19 7e 61 .Nn.....`p....~a 00:22:29.513 00000030 96 8f 12 e6 e0 be cd 2b 8d de a9 3e 19 49 ef 65 .......+...>.I.e 00:22:29.513 00000040 1c a7 80 dc 8e ea 68 75 9f 99 f0 a7 da 37 d8 52 ......hu.....7.R 00:22:29.513 00000050 33 24 7c 27 bd 73 da 91 84 64 e2 85 4b b8 4f 70 3$|'.s...d..K.Op 00:22:29.513 00000060 db 2b b6 c3 ad 9c 23 c0 8f df 09 7d f7 31 b6 15 .+....#....}.1.. 00:22:29.513 00000070 06 5d 62 35 61 ce 7b 4c 70 3c 45 9f 6b d2 8d 5b .]b5a.{Lp. 00:22:29.513 00000120 33 d4 c2 8e 68 59 ec c5 1a f1 bc 03 f8 6f 85 93 3...hY.......o.. 00:22:29.513 00000130 15 b7 e0 53 8e 80 34 91 ef 47 d4 c2 5a f4 0b 06 ...S..4..G..Z... 
00:22:29.513 00000140 87 dd 90 ed 6d db 85 b5 21 8e bc 33 8e d4 6b 3e ....m...!..3..k> 00:22:29.513 00000150 2b 18 a7 77 ee 5a 2e 08 72 8b ce 89 90 b8 ea b3 +..w.Z..r....... 00:22:29.513 00000160 a3 7d bc 46 0b 74 65 43 23 88 34 fc 96 97 db 41 .}.F.teC#.4....A 00:22:29.513 00000170 52 8b 9b 6b 86 b4 08 af b1 04 bd 1d 89 37 76 c1 R..k.........7v. 00:22:29.514 00000180 4a 9a fe 32 53 06 53 58 60 77 41 39 bd 67 dd e0 J..2S.SX`wA9.g.. 00:22:29.514 00000190 0f c8 ae 09 70 5b 9e 99 48 17 47 d5 c1 64 f3 ba ....p[..H.G..d.. 00:22:29.514 000001a0 61 4e 5a 05 a7 b7 46 20 6d 67 e2 34 28 49 b3 5f aNZ...F mg.4(I._ 00:22:29.514 000001b0 cd 8c 44 e2 44 12 b8 c0 08 26 a1 57 6c 8d 61 d9 ..D.D....&.Wl.a. 00:22:29.514 000001c0 b5 65 82 7f b7 d6 c3 55 96 98 a4 4e 2c 33 4e 89 .e.....U...N,3N. 00:22:29.514 000001d0 1f 35 c3 97 b5 b6 76 e2 21 8d e5 f6 24 1e aa 85 .5....v.!...$... 00:22:29.514 000001e0 30 7b de 91 0d a7 d9 11 a6 6d c5 a2 e3 2a 69 0f 0{.......m...*i. 00:22:29.514 000001f0 e4 1c ce 0a b6 cf 5c 02 92 d3 8a e8 60 7e e3 4d ......\.....`~.M 00:22:29.514 00000200 b7 5d 0b 2a ea a9 76 93 6c d5 dc 4b f9 98 77 40 .].*..v.l..K..w@ 00:22:29.514 00000210 8c 15 73 14 69 ab 50 d3 45 58 aa 78 4a 28 d4 b0 ..s.i.P.EX.xJ(.. 00:22:29.514 00000220 28 2f 33 00 38 74 cd 97 9c 9d 95 04 a9 a1 bd 5b (/3.8t.........[ 00:22:29.514 00000230 3d 27 06 c3 69 42 e2 63 33 d0 29 c9 d7 92 8e fa ='..iB.c3.)..... 00:22:29.514 00000240 12 9c 0c 9a 1b 86 f4 d5 ff a2 ee 72 a1 da b6 e6 ...........r.... 00:22:29.514 00000250 cc ee 2e d9 bf c1 67 8d 47 89 44 22 92 d8 3a b3 ......g.G.D"..:. 00:22:29.514 00000260 a3 2c ae 84 73 64 25 7e ba 68 41 c9 33 fa 89 74 .,..sd%~.hA.3..t 00:22:29.514 00000270 4c 43 d2 bd 50 fb 9b 16 57 0a ba 9a 0f 29 10 8a LC..P...W....).. 00:22:29.514 00000280 c6 b9 44 9e 06 ae 44 ca b0 c1 b9 ac de 5d bd 8c ..D...D......].. 00:22:29.514 00000290 42 0e bf 6b 29 41 63 dd ff bd 49 21 4a 40 e0 c6 B..k)Ac...I!J@.. 00:22:29.514 000002a0 2d f6 f1 e8 23 b6 37 66 52 19 ec 8c a8 64 81 0b -...#.7fR....d.. 00:22:29.514 000002b0 f1 77 b6 9d f4 23 ea da c8 0e f8 a7 51 19 34 fe .w...#......Q.4. 00:22:29.514 000002c0 ac 16 3f b4 ea 07 f7 89 52 08 da cc 76 16 4b a2 ..?.....R...v.K. 00:22:29.514 000002d0 b4 74 a8 59 8f 76 70 51 a3 de 8c 84 3e 98 c2 42 .t.Y.vpQ....>..B 00:22:29.514 000002e0 4f 18 2f 5d ff 2b fc 91 eb e6 46 37 b2 8f 9f dd O./].+....F7.... 00:22:29.514 000002f0 59 0c 78 96 fd b2 94 ad ee fb 1d f8 04 e4 99 17 Y.x............. 
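Editor's note: the nvme_auth_set_state debug lines that follow step through the host-side authentication state machine in the order negotiate, await-negotiate, await-challenge, await-reply, await-success1, (sometimes) await-success2, done. The sketch below is a minimal Python illustration of that ordering as observed in this log only; it is not SPDK's implementation (the real logic is in nvme_auth.c, in C), and the state names are simply copied from the debug messages.

# Minimal sketch of the qpair auth states observed in this log (illustrative only).
from enum import Enum

class AuthState(Enum):
    NEGOTIATE = "negotiate"
    AWAIT_NEGOTIATE = "await-negotiate"
    AWAIT_CHALLENGE = "await-challenge"
    AWAIT_REPLY = "await-reply"
    AWAIT_SUCCESS1 = "await-success1"
    AWAIT_SUCCESS2 = "await-success2"   # present in some transactions in this log, absent in others
    DONE = "done"

# Happy-path ordering as it appears in the surrounding debug lines; error states are omitted.
HAPPY_PATH = [
    AuthState.NEGOTIATE,
    AuthState.AWAIT_NEGOTIATE,
    AuthState.AWAIT_CHALLENGE,
    AuthState.AWAIT_REPLY,
    AuthState.AWAIT_SUCCESS1,
    AuthState.AWAIT_SUCCESS2,
    AuthState.DONE,
]

def replay(states=HAPPY_PATH):
    # Print the sequence the same way the log does, one "auth state:" line per step.
    for s in states:
        print(f"auth state: {s.value}")

if __name__ == "__main__":
    replay()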
00:22:29.514 [2024-09-27 13:27:17.329837] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key4, hash=2, dhgroup=4, seq=3775755261, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.514 [2024-09-27 13:27:17.330284] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.514 [2024-09-27 13:27:17.382816] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.514 [2024-09-27 13:27:17.383253] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.514 [2024-09-27 13:27:17.383524] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.514 [2024-09-27 13:27:17.435420] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.514 [2024-09-27 13:27:17.435652] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.514 [2024-09-27 13:27:17.435895] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 4 (ffdhe6144) 00:22:29.514 [2024-09-27 13:27:17.436028] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.514 [2024-09-27 13:27:17.436396] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.514 ctrlr pubkey: 00:22:29.514 00000000 8c aa 8f 5c 90 67 ae 68 99 ae 4d c3 7f c0 fa 36 ...\.g.h..M....6 00:22:29.514 00000010 42 c4 b7 97 b5 75 68 15 be 1f 94 d9 c4 a8 ea 5f B....uh........_ 00:22:29.514 00000020 33 2a 20 58 45 f0 8b d1 b4 0f e1 8a eb fb 49 4d 3* XE.........IM 00:22:29.514 00000030 01 98 80 1f d8 61 76 e0 28 29 61 90 9f c7 f5 a6 .....av.()a..... 00:22:29.514 00000040 5e fc ff 19 ad a3 5e f4 dc 48 b3 65 14 51 6d 6b ^.....^..H.e.Qmk 00:22:29.514 00000050 a7 fa a6 20 08 8f f7 5b a0 24 e2 c3 ba 66 51 c1 ... ...[.$...fQ. 00:22:29.514 00000060 12 28 f8 39 d4 e0 4d a2 48 2a 6b 62 2d f8 51 e1 .(.9..M.H*kb-.Q. 00:22:29.514 00000070 50 02 2f 86 29 cc 8c 93 70 4d 07 47 4d 4a fa 5c P./.)...pM.GMJ.\ 00:22:29.514 00000080 bd 14 24 25 1a 59 e3 e7 10 c9 5d 8c d6 bd 9f 9f ..$%.Y....]..... 00:22:29.514 00000090 6a b6 16 0f 3e 33 d2 4d 36 f9 44 6c fe 40 3f 96 j...>3.M6.Dl.@?. 00:22:29.514 000000a0 a4 dd 61 ff 0c 6d fc ae 43 01 06 46 86 32 35 75 ..a..m..C..F.25u 00:22:29.514 000000b0 78 e0 cd b3 b9 0d e8 17 e9 82 bc f3 f5 c6 e0 17 x............... 00:22:29.514 000000c0 e7 b5 84 9a 75 3e 87 1d 24 62 c8 e1 b6 73 31 70 ....u>..$b...s1p 00:22:29.514 000000d0 27 90 fd 69 72 24 5d 5a fc ad a0 2e d6 c4 2c 7a '..ir$]Z......,z 00:22:29.514 000000e0 91 74 7a c1 98 1b 33 a5 aa 29 ec 96 c7 2d 04 08 .tz...3..)...-.. 00:22:29.514 000000f0 d8 ca e3 91 a5 d0 49 8c fd 44 96 4f 89 0b 8f ce ......I..D.O.... 00:22:29.514 00000100 11 b9 01 87 6a 9b b4 14 fb 29 89 5f cc e9 26 8c ....j....)._..&. 00:22:29.514 00000110 48 a6 9d 55 82 06 1a bd aa a0 8a dd 3d 1b 1c 74 H..U........=..t 00:22:29.514 00000120 ae 8c ad 54 ea 4a a7 68 68 90 15 f4 76 9b b1 1d ...T.J.hh...v... 
00:22:29.514 00000130 42 87 5a e2 6c c4 8c 6c 2e 32 d5 95 dc 0b 9c b0 B.Z.l..l.2...... 00:22:29.514 00000140 88 3f 5c 37 6c 83 c0 a9 18 a0 11 54 6c a2 b0 46 .?\7l......Tl..F 00:22:29.514 00000150 c2 37 5b d2 4f ac c4 ab d0 c6 1f 3e 25 4d 8f 66 .7[.O......>%M.f 00:22:29.514 00000160 1f ee ac 24 fe d3 1c bf 89 d6 8d 18 db dd ee 8c ...$............ 00:22:29.514 00000170 ca 37 7b 2c c4 5e 8f aa 6e 76 98 5b ae b8 4d 98 .7{,.^..nv.[..M. 00:22:29.514 00000180 4c 23 2e cb 69 95 04 0e 9d 8a 8e 39 61 f6 d6 eb L#..i......9a... 00:22:29.514 00000190 4d c4 32 61 62 6d 59 97 d7 26 b6 08 05 35 b5 aa M.2abmY..&...5.. 00:22:29.514 000001a0 90 65 54 65 75 22 7d c5 5b 7b 16 f0 ab 30 9e 2e .eTeu"}.[{...0.. 00:22:29.514 000001b0 9e 17 1f 9f da 30 65 5b 61 15 a8 8b db c7 51 7b .....0e[a.....Q{ 00:22:29.514 000001c0 e3 ea 2b 2f fd 3e d1 c0 0b 9a 81 d6 2a 0a 79 ab ..+/.>......*.y. 00:22:29.514 000001d0 60 98 21 31 52 62 b0 10 f5 3c 58 36 a8 24 02 a1 `.!1Rb.......,... 00:22:29.514 000002f0 c6 eb 9b 55 44 4e d3 17 64 76 57 f3 f7 37 4e 23 ...UDN..dvW..7N# 00:22:29.514 host pubkey: 00:22:29.514 00000000 4a 9d d2 46 2f 4a 59 a5 8f d9 29 97 69 a7 6d 36 J..F/JY...).i.m6 00:22:29.514 00000010 03 34 24 79 54 a0 14 84 89 8c 24 87 fc ef 2a 34 .4$yT.....$...*4 00:22:29.514 00000020 3f 9f 74 4c f6 56 ac 3b a2 1c 4c 6f 9c 72 2a 16 ?.tL.V.;..Lo.r*. 00:22:29.514 00000030 63 47 9d 41 97 39 6d ef 80 07 ad d8 98 d4 bc 91 cG.A.9m......... 00:22:29.514 00000040 5c 87 f5 ce be 34 6a 21 bc 05 14 88 18 5c 31 c1 \....4j!.....\1. 00:22:29.514 00000050 4d ab b4 a0 dc 81 b3 55 fd 91 f7 f6 5d 36 2c 2a M......U....]6,* 00:22:29.514 00000060 fc 4c 8c d6 a7 1b 4e 44 c8 59 dc 10 ce ef 7c af .L....ND.Y....|. 00:22:29.514 00000070 70 13 15 4a 25 48 94 4e d2 0d 2b 4c 4c 8f 36 1f p..J%H.N..+LL.6. 00:22:29.514 00000080 b1 41 69 4c 9b fb d1 26 3d af cc 21 10 a2 3d bf .AiL...&=..!..=. 00:22:29.514 00000090 3c 2e 32 8f 4f 68 e5 08 70 1b 55 65 82 fb 87 88 <.2.Oh..p.Ue.... 00:22:29.514 000000a0 0f 5f b6 94 ba 54 1d 9f b0 72 c5 0b a9 50 f2 00 ._...T...r...P.. 00:22:29.514 000000b0 fe 48 eb d9 ea a6 d8 98 d5 b3 8e fc 22 b9 80 0f .H.........."... 00:22:29.514 000000c0 0e fe 70 59 f5 2c 5a 5d 77 f7 1d d4 3e b4 86 0d ..pY.,Z]w...>... 00:22:29.514 000000d0 28 8f 62 81 ed 12 2c 62 5b 55 bf db a0 51 f3 15 (.b...,b[U...Q.. 00:22:29.514 000000e0 14 f2 4f 49 02 9d 57 41 cc d5 25 c9 0d 13 0a 26 ..OI..WA..%....& 00:22:29.514 000000f0 d4 24 91 a7 be 59 0f 64 16 7e 5f 3f fe fe a9 67 .$...Y.d.~_?...g 00:22:29.514 00000100 4f 80 5e 3b a1 d4 ae 4b 72 41 58 75 8d b1 c8 75 O.^;...KrAXu...u 00:22:29.514 00000110 e0 34 1c 26 e8 f8 f1 c2 72 51 94 b1 ce 6d 88 46 .4.&....rQ...m.F 00:22:29.514 00000120 46 fd 65 46 f5 e3 47 9d cf 5b 35 b7 31 64 d7 6b F.eF..G..[5.1d.k 00:22:29.514 00000130 e8 1a d5 3b 7d d4 06 5d fa c4 04 cf b9 3a 2f 2b ...;}..].....:/+ 00:22:29.514 00000140 96 61 f4 87 f9 d4 28 1c d4 9c ec 75 e1 4c fa 99 .a....(....u.L.. 00:22:29.514 00000150 47 1b 33 e7 f4 e8 6a 5d 79 ff f7 0e f8 0d 09 66 G.3...j]y......f 00:22:29.514 00000160 34 aa 21 3d 73 0f 0e b4 ab 6d 00 09 0d fc c2 5e 4.!=s....m.....^ 00:22:29.514 00000170 86 81 a5 f7 b0 4c d9 12 ab 71 4a f3 bd 4a f9 93 .....L...qJ..J.. 
00:22:29.514 00000180 ed fe 59 c6 28 57 b8 67 41 e7 43 89 90 cd 53 4d ..Y.(W.gA.C...SM 00:22:29.514 00000190 0f 68 d6 a4 7c dd 56 31 98 53 d4 74 38 a9 25 67 .h..|.V1.S.t8.%g 00:22:29.514 000001a0 05 c0 fa c3 e1 d8 66 ed 40 9b ba 4f 67 d9 da 25 ......f.@..Og..% 00:22:29.514 000001b0 33 01 7b 20 29 b1 65 07 32 14 52 7f d1 86 86 55 3.{ ).e.2.R....U 00:22:29.514 000001c0 45 d4 48 ed 2d 74 01 3d 6d a5 09 07 bc 38 d6 c1 E.H.-t.=m....8.. 00:22:29.514 000001d0 21 80 18 7d 5d 50 25 71 32 64 10 8a 01 bf ec ae !..}]P%q2d...... 00:22:29.514 000001e0 db 64 cc 25 fe 6a 36 df b9 3d 59 58 28 84 90 9b .d.%.j6..=YX(... 00:22:29.514 000001f0 d5 60 72 bf 4b 7d 21 c6 34 3e 55 d7 41 ec 0a be .`r.K}!.4>U.A... 00:22:29.514 00000200 9c a5 d6 9a c5 b4 0f d3 4c 8b ac 83 75 af 07 24 ........L...u..$ 00:22:29.514 00000210 ff fd 19 52 2a bf 68 39 d8 6a e0 d6 34 2c 85 92 ...R*.h9.j..4,.. 00:22:29.514 00000220 07 23 d0 79 2b 3d 49 ea 7a c3 93 5e 80 b0 0b 40 .#.y+=I.z..^...@ 00:22:29.514 00000230 fd c0 02 5a 2a 33 73 b7 1a 50 f9 4f e4 6e c1 7f ...Z*3s..P.O.n.. 00:22:29.514 00000240 e3 c9 e0 0a 5b 61 4c 63 8d 13 b5 17 04 c0 75 1a ....[aLc......u. 00:22:29.514 00000250 b3 8f 8c 9b ca d6 68 9c 9e d9 f2 2b 31 69 8f 2d ......h....+1i.- 00:22:29.514 00000260 3d 13 c8 5f 17 a5 9d 5d df 2c be 40 d4 15 00 f7 =.._...].,.@.... 00:22:29.514 00000270 14 89 5f c7 9e 6b d2 c0 0a c5 c2 5f 86 24 16 4c .._..k....._.$.L 00:22:29.514 00000280 7b d4 61 28 07 2a b0 9b 3a 19 76 a0 e6 34 a8 cd {.a(.*..:.v..4.. 00:22:29.514 00000290 d5 df a6 37 50 3a 55 45 54 c6 11 2b 8f 09 5b a9 ...7P:UET..+..[. 00:22:29.514 000002a0 5c 3e 37 1c 65 0e 2d 75 93 a8 c3 f4 13 4e c4 13 \>7.e.-u.....N.. 00:22:29.514 000002b0 da 7a 5e c2 3f 97 29 79 2f bb 8c 90 ec bc 97 fc .z^.?.)y/....... 00:22:29.514 000002c0 07 3b 41 1b b8 a4 2a 92 da f4 25 89 79 93 61 d1 .;A...*...%.y.a. 00:22:29.514 000002d0 5f 30 a2 68 cf e7 d2 f9 a6 35 b7 de 59 b1 f0 5a _0.h.....5..Y..Z 00:22:29.514 000002e0 96 2b 8c df 7f 4b d4 8f 6d 35 d7 4a af f7 3f 72 .+...K..m5.J..?r 00:22:29.514 000002f0 b0 17 bb 1d e6 be c9 4c d0 58 39 44 2f 86 db 8e .......L.X9D/... 00:22:29.514 dh secret: 00:22:29.514 00000000 1c 61 9e c0 aa 80 d1 80 6b 26 90 4f 21 f4 0f 05 .a......k&.O!... 00:22:29.515 00000010 f5 c0 47 49 9d 67 7b d1 f9 32 a4 05 9c 3c e5 3e ..GI.g{..2...<.> 00:22:29.515 00000020 54 04 d3 c6 15 7d 36 79 15 ab d6 bd b3 c8 06 fb T....}6y........ 00:22:29.515 00000030 36 84 b4 bc dd ce 6c bd c7 3b 48 e9 d5 7e 85 6a 6.....l..;H..~.j 00:22:29.515 00000040 06 fd 2b 69 d1 dd 1b 34 7c 66 56 3a 5d ff 59 e8 ..+i...4|fV:].Y. 00:22:29.515 00000050 10 de 9e 55 ee d4 ad 4b 86 d0 c9 95 ab 8f 5c c9 ...U...K......\. 00:22:29.515 00000060 5f bb 08 5e 3d 4d 74 e9 b2 f5 73 ba 74 d3 97 b4 _..^=Mt...s.t... 00:22:29.515 00000070 2f ae d2 f5 b2 be 2a 3e b2 0f c2 c8 85 25 28 fc /.....*>.....%(. 00:22:29.515 00000080 ec 84 61 70 7b c2 93 f6 4e 65 0c 8d 98 e7 3e 6a ..ap{...Ne....>j 00:22:29.515 00000090 11 83 b6 3a 5e 61 c4 ca 10 6b 0f 63 14 a3 44 d1 ...:^a...k.c..D. 00:22:29.515 000000a0 e7 b1 72 0a c0 0a f6 74 ab d5 31 b4 fd e8 01 f9 ..r....t..1..... 00:22:29.515 000000b0 ac 3f 6e c8 dd ce 45 ff 03 dd 52 f8 29 30 cc b8 .?n...E...R.)0.. 00:22:29.515 000000c0 51 3c 3f 1b d9 46 b8 0e 70 03 a9 48 df 69 1a 2b Q.T..&..T 00:22:29.515 00000190 3f 8b 59 77 7f cd be 74 c2 24 b5 a0 df d0 b5 c4 ?.Yw...t.$...... 00:22:29.515 000001a0 de 61 e5 8d ef 8b 1f 52 18 b5 bd a5 30 7f e0 0c .a.....R....0... 00:22:29.515 000001b0 e0 d6 f7 3b 90 15 41 3a 84 03 81 29 07 a2 b2 a5 ...;..A:...).... 
00:22:29.515 000001c0 74 e6 e3 19 0a 38 85 1c ea 1a c5 f5 2f 22 7e cc t....8....../"~. 00:22:29.515 000001d0 e4 26 15 f4 ba 56 42 74 b4 56 b2 cc 9f 06 d2 f4 .&...VBt.V...... 00:22:29.515 000001e0 19 79 f8 02 23 db f3 2e 6e 31 f9 94 a8 ac 27 eb .y..#...n1....'. 00:22:29.515 000001f0 e5 0b 52 35 22 78 06 2b 2b 46 4c 5f d2 53 ef 89 ..R5"x.++FL_.S.. 00:22:29.515 00000200 6e 34 3a d9 5c de 7c 40 a4 4a 2e b9 6e 42 81 6c n4:.\.|@.J..nB.l 00:22:29.515 00000210 eb 98 c3 13 d5 ab 04 e1 05 26 cb b7 88 25 14 51 .........&...%.Q 00:22:29.515 00000220 bb 4b 64 23 43 43 5a b2 a9 7d c9 da d7 ed 4e 68 .Kd#CCZ..}....Nh 00:22:29.515 00000230 d9 62 23 62 75 88 63 21 ff 94 b0 41 b5 c9 db bb .b#bu.c!...A.... 00:22:29.515 00000240 d1 c7 6c a5 f2 c3 f5 38 14 f2 10 ad e4 e4 66 f6 ..l....8......f. 00:22:29.515 00000250 2a f9 28 45 b4 80 e0 71 65 a5 8c 68 ed 0f f4 55 *.(E...qe..h...U 00:22:29.515 00000260 56 7e f6 e2 28 e2 32 42 84 bf 63 a0 ec e4 35 4f V~..(.2B..c...5O 00:22:29.515 00000270 3e c5 e1 b5 cd f3 e8 05 67 18 39 41 09 a7 17 ea >.......g.9A.... 00:22:29.515 00000280 4c b9 9c 5f 26 31 8e b5 d5 70 7b b7 d7 b7 c3 eb L.._&1...p{..... 00:22:29.515 00000290 22 35 a3 37 84 07 aa 55 df 3c 92 28 55 45 bd 68 "5.7...U.<.(UE.h 00:22:29.515 000002a0 bf 1b 78 58 ee 32 10 71 0c ed 20 0c 75 13 ab 51 ..xX.2.q.. .u..Q 00:22:29.515 000002b0 f3 6f 76 da 1e 61 fd f3 47 4d 33 87 05 e0 bb 97 .ov..a..GM3..... 00:22:29.515 000002c0 5d 22 a4 63 33 98 a4 39 37 ed b3 92 3d 2e f9 ca ]".c3..97...=... 00:22:29.515 000002d0 ca 32 73 39 93 36 dd ad 86 ba 87 16 21 8a dd ae .2s9.6......!... 00:22:29.515 000002e0 6a b5 34 6b a3 d3 f1 0a c5 ce 4b cf ec 8f fa ad j.4k......K..... 00:22:29.515 000002f0 6c eb 8a 8c dc 54 b2 f6 d5 0c df bc f1 6f 06 ac l....T.......o.. 00:22:29.515 [2024-09-27 13:27:17.513190] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key4, hash=2, dhgroup=4, seq=3775755262, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.515 [2024-09-27 13:27:17.513484] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.515 [2024-09-27 13:27:17.570110] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.515 [2024-09-27 13:27:17.570467] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.515 [2024-09-27 13:27:17.570747] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.515 [2024-09-27 13:27:17.736305] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.515 [2024-09-27 13:27:17.736619] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 2 (sha384) 00:22:29.515 [2024-09-27 13:27:17.736820] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 5 (ffdhe8192) 00:22:29.515 [2024-09-27 13:27:17.737093] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.515 [2024-09-27 13:27:17.737330] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] 
auth state: await-challenge 00:22:29.515 ctrlr pubkey: 00:22:29.515 00000000 53 78 3e 56 58 ae cd 34 95 f2 11 95 7e 55 1f 0d Sx>VX..4....~U.. 00:22:29.515 00000010 fa 2d 22 43 6f 3a 15 ae 8d 66 28 4a e1 b9 af d6 .-"Co:...f(J.... 00:22:29.515 00000020 48 3f 62 ba 4c 42 39 d6 24 bd 4f 0e f9 47 de 97 H?b.LB9.$.O..G.. 00:22:29.515 00000030 33 65 06 c0 a5 f5 87 38 f2 8e 86 72 d6 97 53 01 3e.....8...r..S. 00:22:29.515 00000040 44 34 9b 36 28 ba 9c fa 09 1d 2f 5e f6 bb 6a 2e D4.6(...../^..j. 00:22:29.515 00000050 70 21 0a 5f f2 4e 19 31 3b 73 9a bb f1 dd 88 10 p!._.N.1;s...... 00:22:29.515 00000060 c0 cf 1b 96 cb ef 1c 14 2d 9a fc 6b 52 57 d2 52 ........-..kRW.R 00:22:29.515 00000070 ef 18 00 10 b0 d0 82 08 cc da c6 1d a5 db 4b ce ..............K. 00:22:29.515 00000080 e3 b7 d4 ef c3 e6 28 d6 59 35 57 d0 d6 96 7f d3 ......(.Y5W..... 00:22:29.515 00000090 a8 87 d4 2d c0 e4 6e 5c b8 23 b1 81 1d 73 51 ce ...-..n\.#...sQ. 00:22:29.515 000000a0 19 f8 92 e9 42 d5 2b 1f 98 05 7d 75 9f a6 b1 86 ....B.+...}u.... 00:22:29.515 000000b0 31 71 c1 a4 96 7e 3a 23 3e d1 91 03 8e 64 fc f5 1q...~:#>....d.. 00:22:29.515 000000c0 a3 3e 2a bf de 44 79 27 af 24 c1 99 ea ae b8 da .>*..Dy'.$...... 00:22:29.515 000000d0 56 a6 71 d4 8e 2c ad 99 2b 1b 51 66 f5 4c cd 91 V.q..,..+.Qf.L.. 00:22:29.515 000000e0 d1 ec 44 55 fc 4c 91 29 80 c0 e6 f8 06 77 2d c4 ..DU.L.).....w-. 00:22:29.515 000000f0 a6 95 ce de 97 53 38 4c c3 35 f6 5e 61 cf fd 7e .....S8L.5.^a..~ 00:22:29.515 00000100 6b 33 11 72 7c d4 af dc 5b 56 24 86 07 13 78 0c k3.r|...[V$...x. 00:22:29.515 00000110 93 9c 72 6b 3d 8c c6 16 bb 50 7c 78 77 e5 ff 24 ..rk=....P|xw..$ 00:22:29.515 00000120 68 f4 88 6d dc de a8 ea 05 70 f4 2d 1b e8 47 98 h..m.....p.-..G. 00:22:29.515 00000130 f9 9f 9b 7f 09 a5 79 c9 55 d0 69 c8 b8 e2 48 50 ......y.U.i...HP 00:22:29.515 00000140 05 09 55 fb 21 39 c6 19 6b 6e 63 29 04 59 61 27 ..U.!9..knc).Ya' 00:22:29.515 00000150 d6 d3 99 9f a6 dd b5 07 2a b0 72 ce 12 21 d7 6a ........*.r..!.j 00:22:29.515 00000160 73 c4 e3 bc a4 b1 a9 57 7e 31 de e8 c1 c2 77 35 s......W~1....w5 00:22:29.515 00000170 a4 a8 e4 78 a8 98 44 d9 6a a9 0e 9c 20 84 f8 a7 ...x..D.j... ... 00:22:29.515 00000180 04 f2 5a 36 f2 a6 da eb ed 7d 42 25 01 80 58 6e ..Z6.....}B%..Xn 00:22:29.515 00000190 7e 0d 30 31 d2 18 68 02 00 b9 64 01 4d 70 17 8c ~.01..h...d.Mp.. 00:22:29.515 000001a0 d5 4b 68 48 6c 19 b0 fc 29 93 78 53 51 c7 7c bc .KhHl...).xSQ.|. 00:22:29.515 000001b0 bb 6c 11 a2 96 67 22 31 65 6c f4 a1 b4 61 16 d7 .l...g"1el...a.. 00:22:29.515 000001c0 21 2a 6d be ba da eb 15 b8 9c 5e d6 ca 54 72 55 !*m.......^..TrU 00:22:29.515 000001d0 f4 e5 99 50 e4 16 c1 d3 39 3c 69 f7 08 a0 a9 41 ...P....9.a.M..8... 00:22:29.515 00000240 70 10 71 62 34 a1 05 d9 4c ad 39 d9 c1 23 71 20 p.qb4...L.9..#q 00:22:29.515 00000250 0a a8 2d bf c2 ed 8a e1 36 92 07 af 08 1a c3 b7 ..-.....6....... 00:22:29.515 00000260 da 30 28 f9 dd bc 22 21 6a d8 0b 9e de 4e 1d 97 .0(..."!j....N.. 00:22:29.515 00000270 19 74 65 e5 7a 3d 89 51 37 01 9f 49 3a 63 b0 f9 .te.z=.Q7..I:c.. 00:22:29.515 00000280 93 e7 4d cb d5 50 56 bd 82 b2 28 ce 8c 19 07 7f ..M..PV...(..... 00:22:29.515 00000290 c5 e3 0a 13 c3 4f ea b3 25 f6 f0 c8 6e b2 0f 1a .....O..%...n... 00:22:29.515 000002a0 8d c7 31 24 1b 47 5b e6 bb ab c4 32 e0 e7 e8 a9 ..1$.G[....2.... 00:22:29.515 000002b0 ba 44 5e d3 b8 e3 b8 6b d9 99 b4 13 45 91 13 21 .D^....k....E..! 
00:22:29.515 000002c0 bd 4f 8e 1f a8 b7 59 73 90 ff c3 1d 6e 86 79 5f .O....Ys....n.y_ 00:22:29.515 000002d0 ae 30 b1 02 8d 85 c1 56 a6 de 76 f9 8c ff 81 61 .0.....V..v....a 00:22:29.515 000002e0 84 41 39 a7 fb 09 5d 2e 55 e7 3a c4 f6 ec b9 41 .A9...].U.:....A 00:22:29.515 000002f0 e9 1b 18 38 5b f1 c8 d5 36 17 44 82 4e f5 a4 a5 ...8[...6.D.N... 00:22:29.515 00000300 e2 08 34 29 0d b8 4b 8b fe 06 0c b3 a6 c0 04 b6 ..4)..K......... 00:22:29.515 00000310 25 de 63 06 3d af 73 da 1c 5f 38 19 48 da 16 ee %.c.=.s.._8.H... 00:22:29.515 00000320 66 de 40 30 b5 32 2c 00 58 03 96 72 33 1b 2b 25 f.@0.2,.X..r3.+% 00:22:29.515 00000330 50 29 62 44 8c 55 02 aa b4 b4 9f 90 23 07 8a 91 P)bD.U......#... 00:22:29.515 00000340 3e 67 89 d5 c8 56 e3 4c 5c 00 ac 7b bf 75 87 01 >g...V.L\..{.u.. 00:22:29.515 00000350 8e 35 d9 70 22 7f 18 65 9d a3 38 98 75 17 40 bf .5.p"..e..8.u.@. 00:22:29.515 00000360 d7 c6 3f 11 3f 45 8b a2 eb bb 28 8b 9e 82 fb 90 ..?.?E....(..... 00:22:29.515 00000370 52 56 f0 d8 ab a7 64 78 00 c2 49 a2 d5 f5 0b bb RV....dx..I..... 00:22:29.515 00000380 28 c9 28 cc de 97 1c 13 1b 5e bd 36 4d b3 4a f0 (.(......^.6M.J. 00:22:29.515 00000390 15 d7 93 83 d8 ff 9c 5e 60 ef d3 e4 0d 3a aa e8 .......^`....:.. 00:22:29.515 000003a0 11 ae c1 d4 1f e3 2b e3 7e 26 81 3c 67 d3 88 1e ......+.~&.W. 00:22:29.515 000003c0 18 01 8e bb a3 be d0 83 99 2c c2 10 4e 12 89 c7 .........,..N... 00:22:29.515 000003d0 2e 4b 4e 29 fe 3f 09 81 a9 b1 1f 6f f3 2e 6d d4 .KN).?.....o..m. 00:22:29.515 000003e0 82 fd 9a 1d ad 2a 07 e9 ad 81 03 43 0e d5 98 18 .....*.....C.... 00:22:29.515 000003f0 54 51 77 8d 6d 91 ef 74 b6 8e 81 3b 4c 42 fb 01 TQw.m..t...;LB.. 00:22:29.515 host pubkey: 00:22:29.515 00000000 ac 50 f6 85 b2 bd 65 cb f2 79 45 53 32 35 28 35 .P....e..yES25(5 00:22:29.516 00000010 5c 6c 5e 85 70 b7 e9 2f 74 77 dd 1a 27 23 94 58 \l^.p../tw..'#.X 00:22:29.516 00000020 51 f2 0b 3a 46 df 5f 4d aa a6 3a 64 c5 8d f1 e8 Q..:F._M..:d.... 00:22:29.516 00000030 21 77 88 70 f3 08 4b 52 99 2d dc e1 72 0e e8 78 !w.p..KR.-..r..x 00:22:29.516 00000040 33 f7 71 98 be 24 85 67 16 a6 91 c1 8d 73 cf d3 3.q..$.g.....s.. 00:22:29.516 00000050 bd ef d4 d7 78 8b 4f ee 84 52 c5 1e 54 ae a2 1f ....x.O..R..T... 00:22:29.516 00000060 7e ca f2 b1 87 b7 98 aa 15 9e d8 e6 9c 26 9b f8 ~............&.. 00:22:29.516 00000070 03 5d aa 46 c4 0f 9d 75 e3 df 82 44 ce 31 a6 4e .].F...u...D.1.N 00:22:29.516 00000080 4f f0 01 b1 a7 19 2d 88 5b 34 2c 06 ab 79 74 d9 O.....-.[4,..yt. 00:22:29.516 00000090 3b 4d 35 a8 5d 9a 5f 5a f7 9f ea e8 bf f2 77 c5 ;M5.]._Z......w. 00:22:29.516 000000a0 86 61 ce 81 f3 88 0f 84 c1 c1 3d 55 e8 fd 91 a6 .a........=U.... 00:22:29.516 000000b0 04 f5 a1 b3 17 4d c7 2b 29 d5 8a 4f 26 e8 ca 30 .....M.+)..O&..0 00:22:29.516 000000c0 99 f1 30 c6 0a 5f a7 5e f0 2e 9c 5f 3e 24 c5 8a ..0.._.^..._>$.. 00:22:29.516 000000d0 81 12 2c b2 f4 e8 16 c2 17 b0 51 e4 06 2f a2 0a ..,.......Q../.. 00:22:29.516 000000e0 c6 8e 18 e1 f6 67 97 36 de 3a ee 93 a3 16 05 18 .....g.6.:...... 00:22:29.516 000000f0 8a 32 e2 b9 6a ea 97 84 1b c3 82 e8 af 67 6b d4 .2..j........gk. 00:22:29.516 00000100 46 d2 11 e2 83 11 0a 50 e5 03 6b 81 49 1b 1d 3a F......P..k.I..: 00:22:29.516 00000110 f1 8d b6 ae 42 3a 84 51 d9 79 47 47 74 2f da c6 ....B:.Q.yGGt/.. 00:22:29.516 00000120 73 d5 88 42 cc d6 f8 46 fe 49 b8 25 41 9b bd 5f s..B...F.I.%A.._ 00:22:29.516 00000130 6c cb e1 76 a1 c4 b5 bf 16 fa cb c5 d0 c4 58 18 l..v..........X. 00:22:29.516 00000140 74 87 c1 6c 25 23 ff 2c 79 7b dc 4d 2c d3 ef 01 t..l%#.,y{.M,... 
00:22:29.516 00000150 48 98 b7 f8 78 d9 28 3d 55 6d 96 d8 2c 6c e8 3e H...x.(=Um..,l.> 00:22:29.516 00000160 d1 02 da e7 9f 35 41 21 c6 82 04 46 d0 e4 83 c0 .....5A!...F.... 00:22:29.516 00000170 a9 64 26 9f 8a eb f1 3a 8e a0 90 31 ed d5 69 b6 .d&....:...1..i. 00:22:29.516 00000180 a0 0c 35 d7 63 f8 05 9d 85 ca c0 cc bc 7c 61 f6 ..5.c........|a. 00:22:29.516 00000190 78 77 77 99 2c 9b 80 dd 5f b2 ba 00 1f 42 06 68 xww.,..._....B.h 00:22:29.516 000001a0 2d 1a 8f 59 cc 08 6a 74 d1 1a c3 c9 b8 04 41 5a -..Y..jt......AZ 00:22:29.516 000001b0 c3 84 b3 a2 87 bd 35 af 89 2a fc 53 cd f9 d9 12 ......5..*.S.... 00:22:29.516 000001c0 87 9a b3 82 13 82 de be 53 4c 87 e6 d6 06 1a f8 ........SL...... 00:22:29.516 000001d0 1d 87 8e d1 f0 45 18 c5 be fd 18 5f ae 7f 08 ec .....E....._.... 00:22:29.516 000001e0 3a c2 af e8 cf c3 e4 63 82 5a 84 a5 23 42 ab 04 :......c.Z..#B.. 00:22:29.516 000001f0 de 2f c4 88 88 1b 63 a2 13 15 92 52 0f 28 53 94 ./....c....R.(S. 00:22:29.516 00000200 63 5c 0f a7 e4 6b 14 1a 14 50 43 f4 ba ef 1d f0 c\...k...PC..... 00:22:29.516 00000210 57 39 3a 76 1f e3 59 3c 63 34 11 ca a2 37 03 05 W9:v..Y\.WO 00:22:29.516 00000060 93 82 c9 3f 10 9f 86 16 be f4 3c d3 fe e4 8e ad ...?......<..... 00:22:29.516 00000070 18 11 ec 04 d1 4c a2 21 71 7a eb 26 b8 de 26 3e .....L.!qz.&..&> 00:22:29.516 00000080 d3 57 bf 9c 92 e8 69 66 8f ff d2 36 b1 a6 ed ba .W....if...6.... 00:22:29.516 00000090 8b 61 26 6d 55 0e f3 75 cb d7 e8 78 83 ac 3e 78 .a&mU..u...x..>x 00:22:29.516 000000a0 e3 be 2d e5 91 fa fe ee 5b d6 06 a5 26 da 51 ce ..-.....[...&.Q. 00:22:29.516 000000b0 4d 47 7d 9c c5 bc a8 49 9a 1a 6b f9 2c e6 73 10 MG}....I..k.,.s. 00:22:29.516 000000c0 6b d6 87 18 80 ec 2f 33 d6 43 28 bd 6c 67 ed 5f k...../3.C(.lg._ 00:22:29.516 000000d0 3b 6c 1f 0a b6 34 22 6e a6 76 20 63 14 d9 e1 9c ;l...4"n.v c.... 00:22:29.516 000000e0 f1 03 b1 62 f5 a6 59 7f 6e a3 9d 0c 7a 0e 4a ff ...b..Y.n...z.J. 00:22:29.516 000000f0 e7 91 03 b7 86 c2 0c bd e2 e0 5c 33 0f 26 eb d9 ..........\3.&.. 00:22:29.516 00000100 b0 2a be bc 58 7e b0 f8 b7 01 f4 c7 e2 96 88 15 .*..X~.......... 00:22:29.516 00000110 4a cb 77 22 29 1f 05 bb dc 2f e1 a3 7e fe 3e 4e J.w")..../..~.>N 00:22:29.516 00000120 45 55 17 31 5f cf 55 c2 95 31 47 2a 25 ca 36 5d EU.1_.U..1G*%.6] 00:22:29.516 00000130 a9 9f a5 1c fb 84 d4 97 ac 97 a3 56 3a 33 14 35 ...........V:3.5 00:22:29.516 00000140 e0 a0 2f 09 0b 60 61 d1 f6 c5 ce dd fb 54 1b e2 ../..`a......T.. 00:22:29.516 00000150 e8 c0 76 c5 4d e7 df 0b d3 89 6b 73 be 53 2e 69 ..v.M.....ks.S.i 00:22:29.516 00000160 34 b4 07 1d 48 e9 6e 42 18 4c 95 31 a2 43 2f f4 4...H.nB.L.1.C/. 00:22:29.516 00000170 11 62 17 d4 0d 96 03 e6 a3 44 46 9a c6 15 2e 21 .b.......DF....! 00:22:29.516 00000180 f1 6a fa 95 dd f7 0a db 86 0e 8c df ec 3d 00 ed .j...........=.. 00:22:29.516 00000190 ca 87 c5 81 d8 a9 ab 8f b8 74 3c 0e 4e 72 ab a2 .........t<.Nr.. 00:22:29.516 000001a0 ed 3b 55 10 e0 38 a2 a4 7c f8 0c ee 6d 8f 45 d8 .;U..8..|...m.E. 00:22:29.516 000001b0 f0 35 ba df 33 30 dd 4e 3f a3 db ba 03 db da 03 .5..30.N?....... 00:22:29.516 000001c0 23 63 fe 2c a8 a9 ff ea 47 91 0f 04 39 15 63 b1 #c.,....G...9.c. 00:22:29.516 000001d0 81 cd 4d e2 9c cb 55 1b 4f 2a ae 3c a5 e8 42 b4 ..M...U.O*.<..B. 00:22:29.516 000001e0 33 a9 65 ec 2b 8d ea 8a 8d 7b 01 09 98 ce c9 82 3.e.+....{...... 00:22:29.516 000001f0 c1 b9 6c da 08 96 6d 50 a2 fd ce cd bf 6d 90 f6 ..l...mP.....m.. 
00:22:29.516 00000200 73 1b 09 c9 82 d9 40 32 d4 30 ad a2 df 1c 5b 20 s.....@2.0....[ 00:22:29.516 00000210 2e 2e cb 53 46 40 d2 d4 a5 c1 6b a5 ad 37 68 0c ...SF@....k..7h. 00:22:29.516 00000220 0b 20 a6 c6 9d c0 ec 62 fb 6b 0c 38 ee 4c ee 95 . .....b.k.8.L.. 00:22:29.516 00000230 77 c1 93 2d 8b 94 be 03 a7 fe 78 f7 24 23 b8 9d w..-......x.$#.. 00:22:29.516 00000240 a0 75 b3 87 d3 65 00 fa 87 c5 94 1d 6b f8 50 a0 .u...e......k.P. 00:22:29.516 00000250 8a 7f f7 a2 d3 14 04 5c a0 47 d4 3b c1 62 db 42 .......\.G.;.b.B 00:22:29.516 00000260 81 8c 75 12 61 ec 7a c8 a3 52 33 fe 6d 8b 70 9f ..u.a.z..R3.m.p. 00:22:29.516 00000270 08 66 8f f1 71 89 12 0a 1e c5 9a 30 39 b3 f3 3b .f..q......09..; 00:22:29.516 00000280 bb 7f 6c 20 0d 08 ce 9e 09 0d da ee f9 6d a5 b9 ..l .........m.. 00:22:29.516 00000290 f7 05 9c dc 7a 15 50 1e 76 b9 fb 50 25 e3 1d 46 ....z.P.v..P%..F 00:22:29.516 000002a0 b9 b5 f6 93 2a 97 d4 34 77 1f ef 1a e4 1b ae e4 ....*..4w....... 00:22:29.516 000002b0 67 42 b2 31 81 a2 9b 7e dc 93 db 03 e1 1e 1c 3b gB.1...~.......; 00:22:29.516 000002c0 f3 90 0a 89 5e 5f cb fa d4 1d 53 3c 9d 5a 9d 2a ....^_....S<.Z.* 00:22:29.516 000002d0 31 0a 9d 0b bd ea b6 85 84 5f 8f 53 1a 27 8b b8 1........_.S.'.. 00:22:29.516 000002e0 cb aa 27 70 02 46 c2 08 9e ba 48 05 8b e6 83 99 ..'p.F....H..... 00:22:29.516 000002f0 96 5f c4 57 93 9d 78 1a 87 d6 c2 7f a0 14 b5 8d ._.W..x......... 00:22:29.516 00000300 d2 be 13 9c 2b f0 88 38 3f 19 61 2b 29 fc 75 ca ....+..8?.a+).u. 00:22:29.516 00000310 a6 2f 14 30 63 29 39 3a f2 0d 9a 1f 5c 7a 01 2d ./.0c)9:....\z.- 00:22:29.516 00000320 25 ab a6 eb 82 75 78 be 3d 82 e2 6d 4f 07 73 92 %....ux.=..mO.s. 00:22:29.516 00000330 4e 2e 94 96 63 f8 aa f2 61 15 60 62 25 a7 1b 71 N...c...a.`b%..q 00:22:29.516 00000340 cd cf c1 d5 f3 63 ce f7 1a df 88 09 32 02 c9 3f .....c......2..? 00:22:29.516 00000350 81 6e 87 32 73 20 68 c8 cc da 31 47 72 7a 70 90 .n.2s h...1Grzp. 00:22:29.516 00000360 bc 7d 48 f2 bd 7b fd e3 be 87 56 f1 43 f6 24 25 .}H..{....V.C.$% 00:22:29.516 00000370 14 6a 83 5d 75 82 6c 10 5d 6c 67 12 fa 0e 95 ce .j.]u.l.]lg..... 00:22:29.516 00000380 00 8f 2b 59 88 e7 31 d0 2c 3b aa 11 d8 12 e9 8b ..+Y..1.,;...... 00:22:29.516 00000390 5f 8a ac 01 56 29 84 57 b0 26 a6 a6 3d b9 bf 60 _...V).W.&..=..` 00:22:29.516 000003a0 6f 59 59 45 02 1d 35 37 53 1b b4 a7 17 9c 3b f1 oYYE..57S.....;. 00:22:29.516 000003b0 82 0d ad 02 ad b4 34 14 86 a5 0f de fd ab 78 2d ......4.......x- 00:22:29.516 000003c0 c1 ca b0 79 35 21 16 cc 36 06 18 06 52 f6 06 00 ...y5!..6...R... 00:22:29.516 000003d0 5f fe 91 ee 2e 6a e6 57 ef 84 38 22 38 c1 64 ff _....j.W..8"8.d. 00:22:29.516 000003e0 a2 16 35 06 ca c7 b3 1e c3 b6 fd 54 5a 01 ea aa ..5........TZ... 00:22:29.516 000003f0 c2 97 6d b0 3f 0b b9 06 a9 3a e1 8c ae e1 40 0a ..m.?....:....@. 
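Editor's note: the ctrlr pubkey / host pubkey / dh secret dumps in this transaction are the two public values and the resulting shared secret of a finite-field Diffie-Hellman exchange over the negotiated ffdhe8192 group (hence the 0x400-byte, i.e. 8192-bit, dumps). As a hedged illustration of the arithmetic only, the sketch below uses a deliberately tiny, well-known prime instead of the RFC 7919 group so the numbers stay readable; it is not the code path exercised by this test.

# Toy finite-field Diffie-Hellman, for illustration only.
import secrets

P = 2**127 - 1   # small well-known Mersenne prime; NOT an RFC 7919 ffdhe prime
G = 2            # generator, chosen only for illustration

ctrlr_priv = secrets.randbelow(P - 2) + 1   # each side keeps a private exponent
host_priv = secrets.randbelow(P - 2) + 1

ctrlr_pub = pow(G, ctrlr_priv, P)   # analogous to the "ctrlr pubkey" dumps above
host_pub = pow(G, host_priv, P)     # analogous to the "host pubkey" dumps above

shared_host = pow(ctrlr_pub, host_priv, P)
shared_ctrlr = pow(host_pub, ctrlr_priv, P)
assert shared_host == shared_ctrlr  # both ends derive the same value, the "dh secret"
print(hex(shared_host))

In DH-HMAC-CHAP this shared secret is not used on its own; it augments the challenge/response computation, with the exact derivation defined by the NVMe specification and not reproduced here.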
00:22:29.516 [2024-09-27 13:27:17.908163] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key0, hash=2, dhgroup=5, seq=3775755263, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.516 [2024-09-27 13:27:17.908589] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.517 [2024-09-27 13:27:18.009909] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.517 [2024-09-27 13:27:18.010407] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.517 [2024-09-27 13:27:18.010539] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.517 [2024-09-27 13:27:18.010953] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.517 [2024-09-27 13:27:18.063194] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.517 [2024-09-27 13:27:18.063428] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.517 [2024-09-27 13:27:18.063756] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 5 (ffdhe8192) 00:22:29.517 [2024-09-27 13:27:18.063891] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.517 [2024-09-27 13:27:18.064233] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.517 ctrlr pubkey: 00:22:29.517 00000000 53 78 3e 56 58 ae cd 34 95 f2 11 95 7e 55 1f 0d Sx>VX..4....~U.. 00:22:29.517 00000010 fa 2d 22 43 6f 3a 15 ae 8d 66 28 4a e1 b9 af d6 .-"Co:...f(J.... 00:22:29.517 00000020 48 3f 62 ba 4c 42 39 d6 24 bd 4f 0e f9 47 de 97 H?b.LB9.$.O..G.. 00:22:29.517 00000030 33 65 06 c0 a5 f5 87 38 f2 8e 86 72 d6 97 53 01 3e.....8...r..S. 00:22:29.517 00000040 44 34 9b 36 28 ba 9c fa 09 1d 2f 5e f6 bb 6a 2e D4.6(...../^..j. 00:22:29.517 00000050 70 21 0a 5f f2 4e 19 31 3b 73 9a bb f1 dd 88 10 p!._.N.1;s...... 00:22:29.517 00000060 c0 cf 1b 96 cb ef 1c 14 2d 9a fc 6b 52 57 d2 52 ........-..kRW.R 00:22:29.517 00000070 ef 18 00 10 b0 d0 82 08 cc da c6 1d a5 db 4b ce ..............K. 00:22:29.517 00000080 e3 b7 d4 ef c3 e6 28 d6 59 35 57 d0 d6 96 7f d3 ......(.Y5W..... 00:22:29.517 00000090 a8 87 d4 2d c0 e4 6e 5c b8 23 b1 81 1d 73 51 ce ...-..n\.#...sQ. 00:22:29.517 000000a0 19 f8 92 e9 42 d5 2b 1f 98 05 7d 75 9f a6 b1 86 ....B.+...}u.... 00:22:29.517 000000b0 31 71 c1 a4 96 7e 3a 23 3e d1 91 03 8e 64 fc f5 1q...~:#>....d.. 00:22:29.517 000000c0 a3 3e 2a bf de 44 79 27 af 24 c1 99 ea ae b8 da .>*..Dy'.$...... 00:22:29.517 000000d0 56 a6 71 d4 8e 2c ad 99 2b 1b 51 66 f5 4c cd 91 V.q..,..+.Qf.L.. 00:22:29.517 000000e0 d1 ec 44 55 fc 4c 91 29 80 c0 e6 f8 06 77 2d c4 ..DU.L.).....w-. 00:22:29.517 000000f0 a6 95 ce de 97 53 38 4c c3 35 f6 5e 61 cf fd 7e .....S8L.5.^a..~ 00:22:29.517 00000100 6b 33 11 72 7c d4 af dc 5b 56 24 86 07 13 78 0c k3.r|...[V$...x. 
00:22:29.517 00000110 93 9c 72 6b 3d 8c c6 16 bb 50 7c 78 77 e5 ff 24 ..rk=....P|xw..$ 00:22:29.517 00000120 68 f4 88 6d dc de a8 ea 05 70 f4 2d 1b e8 47 98 h..m.....p.-..G. 00:22:29.517 00000130 f9 9f 9b 7f 09 a5 79 c9 55 d0 69 c8 b8 e2 48 50 ......y.U.i...HP 00:22:29.517 00000140 05 09 55 fb 21 39 c6 19 6b 6e 63 29 04 59 61 27 ..U.!9..knc).Ya' 00:22:29.517 00000150 d6 d3 99 9f a6 dd b5 07 2a b0 72 ce 12 21 d7 6a ........*.r..!.j 00:22:29.517 00000160 73 c4 e3 bc a4 b1 a9 57 7e 31 de e8 c1 c2 77 35 s......W~1....w5 00:22:29.517 00000170 a4 a8 e4 78 a8 98 44 d9 6a a9 0e 9c 20 84 f8 a7 ...x..D.j... ... 00:22:29.517 00000180 04 f2 5a 36 f2 a6 da eb ed 7d 42 25 01 80 58 6e ..Z6.....}B%..Xn 00:22:29.517 00000190 7e 0d 30 31 d2 18 68 02 00 b9 64 01 4d 70 17 8c ~.01..h...d.Mp.. 00:22:29.517 000001a0 d5 4b 68 48 6c 19 b0 fc 29 93 78 53 51 c7 7c bc .KhHl...).xSQ.|. 00:22:29.517 000001b0 bb 6c 11 a2 96 67 22 31 65 6c f4 a1 b4 61 16 d7 .l...g"1el...a.. 00:22:29.517 000001c0 21 2a 6d be ba da eb 15 b8 9c 5e d6 ca 54 72 55 !*m.......^..TrU 00:22:29.517 000001d0 f4 e5 99 50 e4 16 c1 d3 39 3c 69 f7 08 a0 a9 41 ...P....9.a.M..8... 00:22:29.517 00000240 70 10 71 62 34 a1 05 d9 4c ad 39 d9 c1 23 71 20 p.qb4...L.9..#q 00:22:29.517 00000250 0a a8 2d bf c2 ed 8a e1 36 92 07 af 08 1a c3 b7 ..-.....6....... 00:22:29.517 00000260 da 30 28 f9 dd bc 22 21 6a d8 0b 9e de 4e 1d 97 .0(..."!j....N.. 00:22:29.517 00000270 19 74 65 e5 7a 3d 89 51 37 01 9f 49 3a 63 b0 f9 .te.z=.Q7..I:c.. 00:22:29.517 00000280 93 e7 4d cb d5 50 56 bd 82 b2 28 ce 8c 19 07 7f ..M..PV...(..... 00:22:29.517 00000290 c5 e3 0a 13 c3 4f ea b3 25 f6 f0 c8 6e b2 0f 1a .....O..%...n... 00:22:29.517 000002a0 8d c7 31 24 1b 47 5b e6 bb ab c4 32 e0 e7 e8 a9 ..1$.G[....2.... 00:22:29.517 000002b0 ba 44 5e d3 b8 e3 b8 6b d9 99 b4 13 45 91 13 21 .D^....k....E..! 00:22:29.517 000002c0 bd 4f 8e 1f a8 b7 59 73 90 ff c3 1d 6e 86 79 5f .O....Ys....n.y_ 00:22:29.517 000002d0 ae 30 b1 02 8d 85 c1 56 a6 de 76 f9 8c ff 81 61 .0.....V..v....a 00:22:29.517 000002e0 84 41 39 a7 fb 09 5d 2e 55 e7 3a c4 f6 ec b9 41 .A9...].U.:....A 00:22:29.517 000002f0 e9 1b 18 38 5b f1 c8 d5 36 17 44 82 4e f5 a4 a5 ...8[...6.D.N... 00:22:29.517 00000300 e2 08 34 29 0d b8 4b 8b fe 06 0c b3 a6 c0 04 b6 ..4)..K......... 00:22:29.517 00000310 25 de 63 06 3d af 73 da 1c 5f 38 19 48 da 16 ee %.c.=.s.._8.H... 00:22:29.517 00000320 66 de 40 30 b5 32 2c 00 58 03 96 72 33 1b 2b 25 f.@0.2,.X..r3.+% 00:22:29.517 00000330 50 29 62 44 8c 55 02 aa b4 b4 9f 90 23 07 8a 91 P)bD.U......#... 00:22:29.517 00000340 3e 67 89 d5 c8 56 e3 4c 5c 00 ac 7b bf 75 87 01 >g...V.L\..{.u.. 00:22:29.517 00000350 8e 35 d9 70 22 7f 18 65 9d a3 38 98 75 17 40 bf .5.p"..e..8.u.@. 00:22:29.517 00000360 d7 c6 3f 11 3f 45 8b a2 eb bb 28 8b 9e 82 fb 90 ..?.?E....(..... 00:22:29.517 00000370 52 56 f0 d8 ab a7 64 78 00 c2 49 a2 d5 f5 0b bb RV....dx..I..... 00:22:29.517 00000380 28 c9 28 cc de 97 1c 13 1b 5e bd 36 4d b3 4a f0 (.(......^.6M.J. 00:22:29.517 00000390 15 d7 93 83 d8 ff 9c 5e 60 ef d3 e4 0d 3a aa e8 .......^`....:.. 00:22:29.517 000003a0 11 ae c1 d4 1f e3 2b e3 7e 26 81 3c 67 d3 88 1e ......+.~&.W. 00:22:29.517 000003c0 18 01 8e bb a3 be d0 83 99 2c c2 10 4e 12 89 c7 .........,..N... 00:22:29.517 000003d0 2e 4b 4e 29 fe 3f 09 81 a9 b1 1f 6f f3 2e 6d d4 .KN).?.....o..m. 00:22:29.517 000003e0 82 fd 9a 1d ad 2a 07 e9 ad 81 03 43 0e d5 98 18 .....*.....C.... 00:22:29.517 000003f0 54 51 77 8d 6d 91 ef 74 b6 8e 81 3b 4c 42 fb 01 TQw.m..t...;LB.. 
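Editor's note: the negotiate lines above selected digest 2 (sha384), and the nvme_auth_send_reply lines report len=48, which is the SHA-384 digest length. Purely as a schematic of what an HMAC-based reply value looks like, the sketch below computes an HMAC-SHA-384 over placeholder inputs; the key and challenge are hypothetical, and the real reply covers additional fields (sequence number, transaction id, NQNs, DH augmentation) laid out per the NVMe DH-HMAC-CHAP definition, which this sketch does not attempt to reproduce.

# Schematic HMAC-SHA-384 response, illustration only.
import hashlib
import hmac

# Placeholder inputs -- purely hypothetical values, not taken from this log.
key = bytes(48)                       # stand-in for the secret key material
challenge = bytes.fromhex("11" * 48)  # stand-in for the controller's challenge

response = hmac.new(key, challenge, hashlib.sha384).digest()
print(len(response), response.hex())  # 48 bytes -- matches len=48 in the reply lines above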
00:22:29.517 host pubkey: 00:22:29.517 00000000 9c df 9e 17 e1 f7 d0 f3 e3 d5 44 75 36 b8 5e dc ..........Du6.^. 00:22:29.517 00000010 2c e3 8e 0c 3a 87 89 1a b8 a9 cd 49 97 e6 c4 45 ,...:......I...E 00:22:29.517 00000020 88 d5 90 1b ba af 46 e2 1c c6 27 f1 6c 0b ab 32 ......F...'.l..2 00:22:29.517 00000030 b6 36 c0 16 a9 e3 c8 50 77 b8 df a3 01 88 c2 fb .6.....Pw....... 00:22:29.517 00000040 d1 fc 23 f1 8c f9 61 f4 0a 3b a7 04 03 3b 68 8a ..#...a..;...;h. 00:22:29.517 00000050 9c e4 5d ed 7e 70 11 78 64 8c 66 62 f3 20 22 01 ..].~p.xd.fb. ". 00:22:29.517 00000060 9e 4d eb da 7a 42 ae 99 af 22 48 6b ef 23 b8 b5 .M..zB..."Hk.#.. 00:22:29.517 00000070 14 c3 a9 86 8e a2 d8 9a 2a a6 cd 2b 16 50 69 ac ........*..+.Pi. 00:22:29.517 00000080 67 4b ac d6 f3 a3 b9 d9 dd 70 f7 3d b7 2a ae 6a gK.......p.=.*.j 00:22:29.517 00000090 a1 fe e4 6d c6 ed 7a 39 74 74 50 74 00 0c df d8 ...m..z9ttPt.... 00:22:29.517 000000a0 34 f0 8a 4c 73 9b b4 66 71 a7 6f f1 7d 07 c2 1f 4..Ls..fq.o.}... 00:22:29.517 000000b0 49 4c b6 f6 0f a4 9d 24 ee 4c 45 0e 6d c6 89 72 IL.....$.LE.m..r 00:22:29.517 000000c0 5c f6 fb 91 50 fd 2b d2 8d d6 62 e9 d6 cb 66 5b \...P.+...b...f[ 00:22:29.517 000000d0 6d e4 d5 18 95 25 51 58 4b d6 db 11 29 70 1b 61 m....%QXK...)p.a 00:22:29.517 000000e0 2f e8 f1 7a e4 9c 67 26 7e b1 36 24 47 86 98 94 /..z..g&~.6$G... 00:22:29.517 000000f0 dd 98 28 2e fb ca 4c 2a 48 11 c9 5d fb 97 17 c9 ..(...L*H..].... 00:22:29.517 00000100 63 26 43 15 6b aa cc c7 31 cf 08 27 82 11 ca 52 c&C.k...1..'...R 00:22:29.517 00000110 44 19 3e a1 b8 78 b5 39 0b b4 d9 f9 c4 fb 3a 26 D.>..x.9......:& 00:22:29.517 00000120 77 3c 6d 31 2c 75 65 10 41 47 d4 99 e5 e1 d6 00 wL..]m.. 00:22:29.517 00000150 02 f8 0e 2e 0d 98 29 a0 27 31 f3 54 67 ac 0e 9e ......).'1.Tg... 00:22:29.517 00000160 77 f1 70 e6 ab 84 8c 77 4f 40 f8 87 b1 ae 9a 30 w.p....wO@.....0 00:22:29.517 00000170 df 8a e7 4c e5 b7 20 0a 0d bf 4e 46 61 84 78 d5 ...L.. ...NFa.x. 00:22:29.517 00000180 a6 1b 58 cf 1a 0e d0 92 1a 3b 47 eb 7d f9 32 46 ..X......;G.}.2F 00:22:29.517 00000190 1e 1f 82 ef e2 50 7a 7f 2e c4 4b 31 d3 59 fa 23 .....Pz...K1.Y.# 00:22:29.517 000001a0 73 95 f5 3b 7b 0f 62 8d e1 28 3a ba b6 a7 10 01 s..;{.b..(:..... 00:22:29.517 000001b0 c7 ad 4d 87 42 2e 4e 8d d5 63 ce a2 70 99 68 e8 ..M.B.N..c..p.h. 00:22:29.517 000001c0 d5 40 57 79 e5 f3 eb c2 64 b7 39 d7 6d 73 82 9b .@Wy....d.9.ms.. 00:22:29.517 000001d0 54 28 c9 9e 21 87 80 11 f8 ad 11 63 4b 8f bf f2 T(..!......cK... 00:22:29.517 000001e0 f0 7e 5f b2 3b 28 3b 62 c3 06 67 f6 a5 83 48 7a .~_.;(;b..g...Hz 00:22:29.517 000001f0 57 c9 98 92 f4 a1 73 56 3b ba 47 6c 1d 14 95 d5 W.....sV;.Gl.... 00:22:29.517 00000200 79 ef c7 29 86 17 af 69 c2 28 58 62 f0 80 96 bc y..)...i.(Xb.... 00:22:29.517 00000210 d8 33 63 92 37 7d 10 84 ce 8e 37 02 ec 4e e3 bb .3c.7}....7..N.. 00:22:29.517 00000220 21 55 7c 6d 19 25 65 cc b7 86 7d 54 39 16 c7 02 !U|m.%e...}T9... 00:22:29.517 00000230 f3 a1 24 ee 8b 3d 55 89 14 bd 1d 3b f4 10 7f a1 ..$..=U....;.... 00:22:29.517 00000240 9b e5 c1 5f b7 48 d0 56 a7 60 d5 8a 43 2e 93 1b ..._.H.V.`..C... 00:22:29.517 00000250 16 6a 2d 1f 18 ee 5d d1 5c f7 59 08 a1 d0 bf 66 .j-...].\.Y....f 00:22:29.517 00000260 fc a6 73 11 5e bd 70 14 83 72 7f f8 2c 51 d0 f0 ..s.^.p..r..,Q.. 00:22:29.517 00000270 04 81 d0 01 09 9a 8d e4 d0 03 54 8d fa 29 a2 da ..........T..).. 00:22:29.517 00000280 1c ee ae 4d 67 2e 38 a0 df 49 41 2d ae b1 b0 b0 ...Mg.8..IA-.... 
00:22:29.517 00000290 26 ae 49 67 98 7a 05 5a 21 e0 b8 b0 d4 c9 8c 78 &.Ig.z.Z!......x 00:22:29.517 000002a0 08 3e f2 3b 49 c7 2e 4b ba d0 7b 7f 39 61 5d b9 .>.;I..K..{.9a]. 00:22:29.517 000002b0 a4 91 eb c6 2f a9 45 38 1d 9f ab 77 1d 4a 63 62 ..../.E8...w.Jcb 00:22:29.517 000002c0 81 fb 66 39 ae 0d 78 e3 96 5d 0c ec bb 05 ef 1b ..f9..x..]...... 00:22:29.517 000002d0 15 b6 ff 11 57 8a d8 2e f9 8d 8a 8c 40 2e 3f 9a ....W.......@.?. 00:22:29.517 000002e0 56 c6 1a 5e 14 06 6f ed d8 4a 05 12 b8 20 3a d0 V..^..o..J... :. 00:22:29.517 000002f0 9d b8 c5 e1 38 06 76 f5 b4 34 78 53 26 2b fe 18 ....8.v..4xS&+.. 00:22:29.517 00000300 10 a7 66 01 9a a3 b7 60 56 1c 9d 26 e9 f2 8b b4 ..f....`V..&.... 00:22:29.517 00000310 7e 33 83 4b 1a bd 08 06 49 b9 e4 36 e3 fd 60 7a ~3.K....I..6..`z 00:22:29.517 00000320 be a3 70 c0 11 e7 8d a6 16 40 8f af 46 f3 e9 e8 ..p......@..F... 00:22:29.517 00000330 46 2c b6 ee 2e e5 b0 4d f2 a1 6b ea f0 c0 e1 e2 F,.....M..k..... 00:22:29.517 00000340 b4 d1 44 db 33 d4 ac 5b fc 72 a1 d7 30 e1 92 bf ..D.3..[.r..0... 00:22:29.517 00000350 3f 74 42 3a 82 cc 93 75 54 8c 1d 41 38 13 69 0b ?tB:...uT..A8.i. 00:22:29.517 00000360 d4 e6 b4 87 74 17 21 73 e7 24 f1 09 17 bc 56 7c ....t.!s.$....V| 00:22:29.517 00000370 72 38 28 aa a7 15 da 17 f6 97 be c1 48 12 1b d7 r8(.........H... 00:22:29.517 00000380 3e bd 88 f4 82 e1 4c 7d 58 c1 a8 35 ba f0 e6 3c >.....L}X..5...< 00:22:29.517 00000390 8a 73 69 f3 26 81 f1 77 5f 84 ac 63 8e a7 35 85 .si.&..w_..c..5. 00:22:29.517 000003a0 b1 2b 2f e8 ce c6 ce 72 a4 6e 5b d4 4e 30 43 c1 .+/....r.n[.N0C. 00:22:29.517 000003b0 4f 2b a0 1e 43 c3 41 6a cb a7 1d 32 09 0b a9 92 O+..C.Aj...2.... 00:22:29.518 000003c0 77 43 16 08 a2 25 27 6e ae 0b f6 ab 95 b7 47 bc wC...%'n......G. 00:22:29.518 000003d0 63 56 c4 64 3a 3b 2d 1f b9 44 02 7d eb b2 cf 17 cV.d:;-..D.}.... 00:22:29.518 000003e0 0e 4c ba e4 80 26 58 ae 12 48 ae 17 b5 e1 60 dc .L...&X..H....`. 00:22:29.518 000003f0 71 dd dd 3e b2 7b 89 23 7b b2 48 92 86 d9 15 bf q..>.{.#{.H..... 00:22:29.518 dh secret: 00:22:29.518 00000000 18 a2 1a fe 95 c2 a4 ed 6d d2 6f 2c 56 d0 82 21 ........m.o,V..! 00:22:29.518 00000010 86 53 fc 4c d6 8f 8a b4 44 a7 3a 92 25 3f 7a 7b .S.L....D.:.%?z{ 00:22:29.518 00000020 9e 68 9a 5b 60 19 4d ff 0a 5f 39 ca 50 7c 9d 5c .h.[`.M.._9.P|.\ 00:22:29.518 00000030 32 30 e1 a3 55 c9 28 7e a9 07 e0 de 72 de 4f 5f 20..U.(~....r.O_ 00:22:29.518 00000040 b1 c4 0a 10 d8 13 03 7c 96 63 b8 8b 35 ee 40 83 .......|.c..5.@. 00:22:29.518 00000050 86 fd fe 3b e7 bf 66 e6 5d f7 00 fe 94 8c 2f e3 ...;..f.]...../. 00:22:29.518 00000060 03 c2 37 86 3c 72 4d ec f6 15 b3 b6 a6 df 25 4c ..7....."|..v 00:22:29.518 00000080 4c 50 43 96 b9 90 f8 1c f9 b9 0f 08 53 78 10 60 LPC.........Sx.` 00:22:29.518 00000090 1e 93 8d 9b af fe 58 2c ef 18 a1 5f 0c 1b f9 9e ......X,..._.... 00:22:29.518 000000a0 71 8f 6a 02 f6 e1 5f 01 cc 9c 6d 1e e0 dd a7 ef q.j..._...m..... 00:22:29.518 000000b0 b2 fe 4d 5c 89 0e d9 8d c6 61 91 ba 6b 89 ea 1c ..M\.....a..k... 00:22:29.518 000000c0 27 43 3f f1 ef ae 5b d9 f5 0a 61 a9 16 b9 da 6b 'C?...[...a....k 00:22:29.518 000000d0 b5 4f 5a 15 5c 41 08 17 24 19 d9 26 35 e5 38 c8 .OZ.\A..$..&5.8. 00:22:29.518 000000e0 27 c9 47 5f 67 d5 86 df 0b fe 8f 50 df 52 7c 38 '.G_g......P.R|8 00:22:29.518 000000f0 fb d9 9c 51 9d c2 3d 41 1b b2 c2 23 82 81 11 d6 ...Q..=A...#.... 00:22:29.518 00000100 81 d6 91 e2 71 bd e8 fa 89 9c 83 4a a4 7c 85 be ....q......J.|.. 
00:22:29.518 00000110 81 bf ca 48 30 90 e9 79 e5 b5 d2 66 d9 15 7e 45 ...H0..y...f..~E 00:22:29.518 00000120 2e a7 a8 d1 cc 10 b9 f5 81 b2 42 75 ab 39 f6 7a ..........Bu.9.z 00:22:29.518 00000130 7c 7d f5 a1 12 d6 ba c7 07 4a 9a 8e 94 ce 4a 2f |}.......J....J/ 00:22:29.518 00000140 7c 63 0c b4 9d a0 97 e5 31 6d be 8c b5 15 b6 4c |c......1m.....L 00:22:29.518 00000150 b7 10 fb dc 8d 8c 70 66 90 2a df 6f d7 1d 16 4d ......pf.*.o...M 00:22:29.518 00000160 cf d9 38 25 8f 92 6a 5a 8f e4 61 1d cb d7 31 d1 ..8%..jZ..a...1. 00:22:29.518 00000170 9b e0 0e 9e c3 37 fb 00 cf 06 5d cd d1 19 d3 ea .....7....]..... 00:22:29.518 00000180 c4 71 4f c7 49 91 31 3f d8 af 15 c7 f3 47 1c 1d .qO.I.1?.....G.. 00:22:29.518 00000190 a9 0f b6 3c 31 e8 ed 86 60 ad 43 4a 4f 64 8f 46 ...<1...`.CJOd.F 00:22:29.518 000001a0 d4 be df e5 d1 88 81 a7 0f 23 24 f9 ac 60 61 0c .........#$..`a. 00:22:29.518 000001b0 00 8d bd aa 67 69 70 e4 7b 2c c2 da 89 96 00 71 ....gip.{,.....q 00:22:29.518 000001c0 7d 06 c3 62 43 56 fc b3 15 65 df 5e 99 02 22 7e }..bCV...e.^.."~ 00:22:29.518 000001d0 ed 3b dc 36 dd e4 7d fa cc e7 3e 97 f9 3c 3e 85 .;.6..}...>..<>. 00:22:29.518 000001e0 b0 66 30 61 1a 52 70 41 1b ec 6b f7 ee 48 d3 0d .f0a.RpA..k..H.. 00:22:29.518 000001f0 d7 cc 6b 6e 4c 9b 83 63 91 ad b0 a4 49 7b 23 29 ..knL..c....I{#) 00:22:29.518 00000200 d3 70 12 53 7a 11 de 91 02 e9 a2 b2 98 76 44 e7 .p.Sz........vD. 00:22:29.518 00000210 82 fc 15 4e 04 93 61 1f 37 d9 29 44 c7 89 d6 6e ...N..a.7.)D...n 00:22:29.518 00000220 b5 49 e3 c2 83 11 43 15 39 fd 20 31 cc 6f 58 15 .I....C.9. 1.oX. 00:22:29.518 00000230 49 fe 37 93 c1 c2 e0 aa f6 8a cd e3 29 02 6a b5 I.7.........).j. 00:22:29.518 00000240 d8 8e 33 03 31 b7 b9 bc b9 ac cc ed 14 4f 4c c2 ..3.1........OL. 00:22:29.518 00000250 38 e0 83 88 3c 92 ac ca f0 61 a5 9d 67 c8 51 af 8...<....a..g.Q. 00:22:29.518 00000260 79 cd ef f1 a9 dc ae 41 fa 27 a7 e8 f4 95 2b d8 y......A.'....+. 00:22:29.518 00000270 46 c0 42 ed 9d 9f b6 36 6c f8 7b 48 f9 e7 dd 2a F.B....6l.{H...* 00:22:29.518 00000280 12 4c a9 00 01 e7 5f 9d 91 a3 c7 52 52 aa 03 17 .L...._....RR... 00:22:29.518 00000290 35 96 59 77 1d 73 c8 90 55 9c 34 86 ab 32 65 4b 5.Yw.s..U.4..2eK 00:22:29.518 000002a0 69 b0 10 0b da 92 da 9d 4c 88 05 0a c3 3b 5b 8c i.......L....;[. 00:22:29.518 000002b0 fb 69 54 f5 1f 80 3b 4d 73 bf d8 de 0e 5b 15 a8 .iT...;Ms....[.. 00:22:29.518 000002c0 1f f1 00 8a 21 16 11 c1 40 74 29 bd f2 c7 76 50 ....!...@t)...vP 00:22:29.518 000002d0 86 e9 c9 56 b6 32 03 59 e3 0c fc 94 4a 62 97 b7 ...V.2.Y....Jb.. 00:22:29.518 000002e0 49 2b f0 71 d8 6a d5 12 ac fe 55 30 73 1c 67 48 I+.q.j....U0s.gH 00:22:29.518 000002f0 57 31 04 f0 42 e4 05 3e 16 fc 19 42 0f 78 2b 72 W1..B..>...B.x+r 00:22:29.518 00000300 c2 92 53 70 42 9b 80 ee ab cd 73 10 9b e3 7a 30 ..SpB.....s...z0 00:22:29.518 00000310 aa c2 e6 29 03 0c 35 5b 5f 30 67 23 9b c9 bc dc ...)..5[_0g#.... 00:22:29.518 00000320 55 82 b0 85 4a b5 0f 5f 3e 67 45 8c 59 78 c9 6d U...J.._>gE.Yx.m 00:22:29.518 00000330 72 97 d2 38 69 a7 79 43 f3 ee a4 0e 21 82 5a c0 r..8i.yC....!.Z. 00:22:29.518 00000340 b0 77 36 cc d4 08 d7 2e af 25 ab 7a 4e 65 4f fc .w6......%.zNeO. 00:22:29.518 00000350 ef 2c c2 88 bd cf 1f eb 27 0c ad f8 88 ce e5 4b .,......'......K 00:22:29.518 00000360 8a 1c 17 69 0d c0 c1 b8 c3 1a f7 bb 3e 6d 8a 22 ...i........>m." 00:22:29.518 00000370 6d b0 9a 62 e8 2b 34 a2 6b 45 78 0b 69 bf a9 a9 m..b.+4.kEx.i... 
00:22:29.518 00000380 f3 15 12 ae c8 3d 5f 12 7e 9e 3c 5b b8 13 01 7c .....=_.~.<[...| 00:22:29.518 00000390 c2 b9 f9 c2 ef b6 3c 7a ad 6d fe 42 b4 f3 8b 46 ........ 00:22:29.518 00000050 fd 8d c9 c1 7c e5 5b ad 89 69 59 59 d9 c4 30 df ....|.[..iYY..0. 00:22:29.518 00000060 18 80 aa e5 3a a1 dc 72 18 7d c6 48 09 a8 e3 8d ....:..r.}.H.... 00:22:29.518 00000070 38 10 0f 2e f9 5a 8c 33 00 05 b1 89 e2 a5 0a 38 8....Z.3.......8 00:22:29.518 00000080 98 84 03 fe e6 70 83 47 5e 69 79 88 6b a1 4e b5 .....p.G^iy.k.N. 00:22:29.518 00000090 f0 de ca 59 cb 79 2c 6e f1 cb 6a c7 9f 2b 3a 9d ...Y.y,n..j..+:. 00:22:29.518 000000a0 57 b8 21 36 55 b4 25 fd 89 28 e3 27 bc a6 74 f9 W.!6U.%..(.'..t. 00:22:29.518 000000b0 f0 5d 04 8a 61 ba 10 5e cf 41 9b 9e 57 fe 85 00 .]..a..^.A..W... 00:22:29.518 000000c0 aa 16 a4 2c 85 09 dc 05 d1 41 8c d7 0d bd 37 68 ...,.....A....7h 00:22:29.518 000000d0 2a 78 cf 86 9e 82 8c 88 32 64 23 18 66 a8 7f 99 *x......2d#.f... 00:22:29.518 000000e0 fd e1 d0 20 95 c0 0c ab f3 cd 7d 25 ea 57 54 c8 ... ......}%.WT. 00:22:29.518 000000f0 66 c3 31 c5 45 3f 8c 81 50 95 b5 04 33 3f 3e ac f.1.E?..P...3?>. 00:22:29.518 00000100 a7 96 e3 c1 e7 57 d1 7b a3 66 32 60 41 f8 bc a1 .....W.{.f2`A... 00:22:29.518 00000110 c9 ad 20 df e6 dd 95 42 24 f0 d5 4a 3d 33 ee 39 .. ....B$..J=3.9 00:22:29.518 00000120 e0 e9 44 0d ff 57 2e 6d 7f 5c 12 9a fe 38 59 ed ..D..W.m.\...8Y. 00:22:29.518 00000130 c4 bf 91 73 ff ac b4 fd 8e 13 0c 71 db 98 e1 e6 ...s.......q.... 00:22:29.518 00000140 42 a0 09 6d b9 fd 1d e2 0e db d1 fa ed b7 7a 52 B..m..........zR 00:22:29.518 00000150 fa b8 2a e6 ef 8b 77 be 42 69 5a 99 0d 38 bd 28 ..*...w.BiZ..8.( 00:22:29.518 00000160 17 9e a8 09 99 e3 79 c1 36 6b da 77 81 a1 5a e6 ......y.6k.w..Z. 00:22:29.518 00000170 cb 08 89 27 9c 01 f6 3b c7 ce c5 05 84 7a 46 3c ...'...;.....zF< 00:22:29.518 00000180 b0 fb aa cf da e5 56 e2 88 80 5b a3 11 22 4b 8b ......V...[.."K. 00:22:29.518 00000190 65 a7 d8 6f aa 3b 42 59 57 84 b0 7b 62 46 5a 18 e..o.;BYW..{bFZ. 00:22:29.518 000001a0 05 9e 32 7d de ae 3a ff 64 81 f5 e9 df 45 de 24 ..2}..:.d....E.$ 00:22:29.518 000001b0 bd 1f 93 83 c5 ee ca fa 58 03 f5 c4 eb d3 73 78 ........X.....sx 00:22:29.518 000001c0 5c d6 9a 8a 04 d8 33 69 0f 75 d9 85 ec 1f ab ef \.....3i.u...... 00:22:29.518 000001d0 11 ad c2 b2 d4 0a d7 13 5d b2 a1 13 85 54 29 df ........]....T). 00:22:29.518 000001e0 2d d2 58 c7 19 d9 21 5c 23 71 c4 f3 25 c4 a4 6c -.X...!\#q..%..l 00:22:29.518 000001f0 34 c3 8f a8 0b af 15 23 ac 6b cc 93 be a0 60 ab 4......#.k....`. 00:22:29.518 00000200 6c 16 0a 98 2b 9a 0c 79 32 d7 e8 9e 67 a3 05 b3 l...+..y2...g... 00:22:29.518 00000210 a9 47 71 47 7f 55 5f 19 cf 4f cf a5 5d e0 b0 84 .GqG.U_..O..]... 00:22:29.518 00000220 71 2d d6 a7 01 3b 34 88 06 de b3 0d 3b d3 ba df q-...;4.....;... 00:22:29.518 00000230 1a 04 36 19 e6 62 1c ea 77 2e 92 b5 b8 4b f5 45 ..6..b..w....K.E 00:22:29.518 00000240 87 e4 e2 f6 ea 37 d5 1c 40 76 c0 22 48 15 57 04 .....7..@v."H.W. 00:22:29.518 00000250 b2 9e c3 a5 c4 13 3e 47 46 a6 a1 e6 dc c4 0d 43 ......>GF......C 00:22:29.518 00000260 d4 cd 4b ea 3b 72 01 93 fa 9a 6b 34 54 21 e0 f4 ..K.;r....k4T!.. 00:22:29.518 00000270 ff ad 1d 41 26 99 8d 31 1c f6 55 25 34 a4 a6 3b ...A&..1..U%4..; 00:22:29.518 00000280 dc ef b4 b3 c2 7f 65 24 bb a5 9a 53 41 d3 4a 3a ......e$...SA.J: 00:22:29.518 00000290 c9 d1 78 09 31 10 57 97 eb 4f 10 1c 4a c4 22 fa ..x.1.W..O..J.". 00:22:29.518 000002a0 bb 3c 32 18 86 7d 1c bb 12 9e c0 ce 35 4b e0 eb .<2..}......5K.. 
00:22:29.518 000002b0 1e 5a d3 f5 17 f2 4c 82 99 e3 fc ad ed 63 a3 14 .Z....L......c.. 00:22:29.518 000002c0 28 07 e9 bd bc 0b 34 58 41 51 0d 65 48 79 fc 23 (.....4XAQ.eHy.# 00:22:29.518 000002d0 54 78 10 1f c8 85 5e 2b 03 24 2a aa 01 a0 63 73 Tx....^+.$*...cs 00:22:29.518 000002e0 ca c4 fc 49 e1 29 1d 0c f4 39 88 0a 24 54 ea 68 ...I.)...9..$T.h 00:22:29.518 000002f0 67 f1 81 0b 00 6d 4e cf b5 ca 83 4b 4a 5b e5 d1 g....mN....KJ[.. 00:22:29.518 00000300 d2 02 d9 f3 23 69 ee 64 ab d9 ea 0e 3f df a2 4e ....#i.d....?..N 00:22:29.518 00000310 2e 34 d8 60 4c 01 26 1a 02 2d 38 d0 d7 2b ba c1 .4.`L.&..-8..+.. 00:22:29.518 00000320 b6 b4 89 6a 06 43 b3 9f 83 d0 b9 5f e5 07 c9 ad ...j.C....._.... 00:22:29.519 00000330 6c 91 75 3b 3d ff 42 e0 bf 96 36 2f d3 73 eb 97 l.u;=.B...6/.s.. 00:22:29.519 00000340 87 62 31 6a 2a e5 97 9a f9 e7 b3 a7 53 90 58 58 .b1j*.......S.XX 00:22:29.519 00000350 ae 5b f9 2c 8c d3 aa 1b ee a6 9d 2b eb 64 5e dd .[.,.......+.d^. 00:22:29.519 00000360 4d 7c 4c e9 21 7f 97 bf 08 b6 c1 e5 e1 cf d7 82 M|L.!........... 00:22:29.519 00000370 51 28 69 d2 d0 05 9a 4c 7d 2f d5 b3 af 5f 3b 16 Q(i....L}/..._;. 00:22:29.519 00000380 5b 0f 62 69 6f cb 06 28 d2 b5 a1 d3 11 02 0c dd [.bio..(........ 00:22:29.519 00000390 5f 85 4b 83 05 82 52 39 c9 89 c9 83 77 ee 59 d3 _.K...R9....w.Y. 00:22:29.519 000003a0 f9 ad 6a 89 22 bc c7 b1 49 cf 40 47 c3 d1 47 53 ..j."...I.@G..GS 00:22:29.519 000003b0 34 e1 7a 60 a5 d0 c2 36 91 0e 7a df ce a5 30 ac 4.z`...6..z...0. 00:22:29.519 000003c0 ed 5f 54 c5 33 30 3c d7 bb 21 aa fd 68 9d 90 cd ._T.30<..!..h... 00:22:29.519 000003d0 68 9a ce 55 42 b0 7c b7 f0 13 e0 65 62 32 d1 e8 h..UB.|....eb2.. 00:22:29.519 000003e0 cd 11 87 7f fa 50 85 2c 08 05 e8 a5 cd 91 42 fd .....P.,......B. 00:22:29.519 000003f0 51 2f 73 3a 7b b7 50 e2 5f 5f aa f0 60 60 cc 24 Q/s:{.P.__..``.$ 00:22:29.519 host pubkey: 00:22:29.519 00000000 a6 a8 99 ce bf 1e 79 b8 b7 56 d4 6e 9c 00 1a 0c ......y..V.n.... 00:22:29.519 00000010 e7 2f a1 37 8a 7b 56 32 a9 29 f7 d9 e9 7e 83 60 ./.7.{V2.)...~.` 00:22:29.519 00000020 b5 1d d9 22 a8 69 90 af bc 9d 68 4a 04 47 b5 52 ...".i....hJ.G.R 00:22:29.519 00000030 ec 77 14 3b b1 50 3c 87 9c 5d 87 ce a9 35 17 b7 .w.;.P<..]...5.. 00:22:29.519 00000040 37 71 e8 73 68 ee 46 00 50 62 f7 8f c7 14 5e f1 7q.sh.F.Pb....^. 00:22:29.519 00000050 ad 76 d7 65 ba 55 df 40 cf 0c 76 61 e6 37 72 37 .v.e.U.@..va.7r7 00:22:29.519 00000060 98 72 6f 6b 36 0f 8c 05 40 06 c7 8e 33 55 35 36 .rok6...@...3U56 00:22:29.519 00000070 02 a4 3f aa d8 d4 0f dd 30 88 a6 2f 8a 27 48 81 ..?.....0../.'H. 00:22:29.519 00000080 31 98 0d 04 c3 bb 6d 71 95 57 ed 26 66 7d 70 d6 1.....mq.W.&f}p. 00:22:29.519 00000090 82 24 53 d9 1d 4b b7 08 57 cc 68 cc 55 76 75 31 .$S..K..W.h.Uvu1 00:22:29.519 000000a0 dc 5c 61 53 ca c6 35 0f 8c 75 6d 30 24 52 3c 61 .\aS..5..um0$R.1...x(.F..2 00:22:29.519 00000180 95 a9 3a b6 a7 49 65 3d ae ad 3e 18 f2 bb ab cb ..:..Ie=..>..... 00:22:29.519 00000190 02 e0 cd 02 6e e6 06 0d 1e 66 22 1b 4a 59 29 04 ....n....f".JY). 00:22:29.519 000001a0 6d 08 88 9e 1a 2e 4f b0 74 00 e6 ae e6 c0 a9 ab m.....O.t....... 00:22:29.519 000001b0 52 74 13 5b 1d 07 9b 7f 18 93 4e fb 38 1f e6 32 Rt.[......N.8..2 00:22:29.519 000001c0 24 15 2e 5a ce c8 04 5a f0 f8 e9 d3 46 91 1e 03 $..Z...Z....F... 00:22:29.519 000001d0 45 ed e7 36 3b dc a2 6b ed 45 2e 66 6c 0e 91 67 E..6;..k.E.fl..g 00:22:29.519 000001e0 7c fb 85 87 5d 97 62 36 bb 35 7d 66 fd bf 55 9d |...].b6.5}f..U. 00:22:29.519 000001f0 8e c3 f7 4b 71 d3 5f 4e 65 9a d0 63 58 af 70 18 ...Kq._Ne..cX.p. 
00:22:29.519 00000200 28 3c 76 b1 1f 62 07 d8 22 ba 80 c2 f6 4f 54 05 (....4..'.... 00:22:29.519 00000370 5a 2e 5e f5 06 94 83 39 8e 35 4b ee 92 c0 49 08 Z.^....9.5K...I. 00:22:29.519 00000380 35 67 fa 1e 70 a7 d4 59 cb e6 8d 20 38 26 61 8c 5g..p..Y... 8&a. 00:22:29.519 00000390 88 bf df 31 45 fb 55 f8 9c 4f f8 a6 81 47 f0 98 ...1E.U..O...G.. 00:22:29.519 000003a0 b9 a0 36 d6 84 e6 77 c5 85 b5 af 4d d4 e4 fc 07 ..6...w....M.... 00:22:29.519 000003b0 38 d5 49 f8 91 9f a4 b6 d7 6a f2 5f 44 13 89 15 8.I......j._D... 00:22:29.519 000003c0 bd 88 55 64 39 2f 11 79 c4 9f 9f ab 33 40 0a 76 ..Ud9/.y....3@.v 00:22:29.519 000003d0 fb 74 df ce 7b f1 b5 1d ce ef 67 76 ec 1d d4 8c .t..{.....gv.... 00:22:29.519 000003e0 fc e5 c0 5c bd 66 42 b0 a5 e5 2b 37 67 b3 40 4d ...\.fB...+7g.@M 00:22:29.519 000003f0 a3 95 b5 89 8f ec 3b e3 35 c5 43 62 81 d8 cd 63 ......;.5.Cb...c 00:22:29.519 dh secret: 00:22:29.519 00000000 85 c6 e8 b6 0f 70 51 0f d0 1e e7 b8 fe a4 e7 22 .....pQ........" 00:22:29.519 00000010 61 4d 5e 62 7c aa 10 fe df 9d d8 11 c8 8d 73 83 aM^b|.........s. 00:22:29.519 00000020 35 ca 36 48 00 4a a0 6e 10 c9 01 91 d2 b3 17 30 5.6H.J.n.......0 00:22:29.519 00000030 17 b6 1b eb 32 c0 ab b2 35 80 af 4b db cc 72 04 ....2...5..K..r. 00:22:29.519 00000040 f3 e6 2c 07 62 24 8e e8 cc ac a7 e6 66 06 8e 03 ..,.b$......f... 00:22:29.519 00000050 37 83 0a db e3 06 03 68 0c ae 90 2e 48 f6 f3 84 7......h....H... 00:22:29.519 00000060 ee 84 59 6c 41 4f 33 59 f7 58 fb ac fa fd fd ab ..YlAO3Y.X...... 00:22:29.519 00000070 55 42 9b 42 8e a9 2b f7 6a 9f 24 8c c1 8b 4b c3 UB.B..+.j.$...K. 00:22:29.519 00000080 7f 81 46 94 c2 ab a6 06 17 2a c3 0f 10 b8 e4 88 ..F......*...... 00:22:29.519 00000090 6b 69 7b e9 cc 35 55 3d 99 e1 c1 ee 5c 9b 8e 6c ki{..5U=....\..l 00:22:29.519 000000a0 60 e5 6f d2 f4 25 08 e7 fb dd 0f de 80 42 5b 56 `.o..%.......B[V 00:22:29.519 000000b0 a4 8a 5a 95 fc 16 79 e8 dd 62 d6 80 b4 7d 02 07 ..Z...y..b...}.. 00:22:29.519 000000c0 1e 87 ca d8 7d d9 ae ae 59 7a 55 2a 40 fc 6c 06 ....}...YzU*@.l. 00:22:29.519 000000d0 03 9f f3 ed ee 49 06 b6 4b e4 15 51 5b 41 54 9f .....I..K..Q[AT. 00:22:29.519 000000e0 12 98 99 11 70 60 80 22 6e 3f e2 f9 18 09 45 7f ....p`."n?....E. 00:22:29.519 000000f0 30 33 86 5a 6c 1d 78 fe 65 a5 10 54 17 30 e8 71 03.Zl.x.e..T.0.q 00:22:29.519 00000100 05 50 3c aa 86 5a 20 a8 6b bb 27 ca 6d 8d 3b b0 .P<..Z .k.'.m.;. 00:22:29.519 00000110 26 9a 10 74 10 d7 e0 8f 8c 32 07 6b c0 96 86 3c &..t.....2.k...< 00:22:29.519 00000120 83 e7 61 57 4f 09 fc 1b 68 a2 cc 17 87 9e 49 4c ..aWO...h.....IL 00:22:29.519 00000130 04 86 a0 6c ca 27 46 37 fd 89 1f 3e 6e f8 98 b7 ...l.'F7...>n... 00:22:29.519 00000140 bb 2c fd 1f ad 18 4a e4 78 8e e8 8f e5 23 c4 d8 .,....J.x....#.. 00:22:29.519 00000150 45 12 f2 90 bd 70 a4 a8 d7 6d 73 ea f5 5c 6d 40 E....p...ms..\m@ 00:22:29.519 00000160 be 94 0e 83 a5 a2 56 01 ca 86 b3 20 ce ed 8f d0 ......V.... .... 00:22:29.519 00000170 5b 8e ab 31 13 8b 76 6c f2 73 63 2d 6a 05 cd 60 [..1..vl.sc-j..` 00:22:29.519 00000180 88 63 d3 4a ac 17 a3 2a bf d8 d0 2d fc 94 b7 b8 .c.J...*...-.... 00:22:29.519 00000190 18 5e df d5 01 9f dd 38 4e d5 d2 8f 76 62 14 af .^.....8N...vb.. 00:22:29.519 000001a0 79 14 4f b7 bf 30 f8 e1 69 f7 9e 99 ab 96 db 93 y.O..0..i....... 00:22:29.519 000001b0 1b 33 34 3c c7 8b 56 af 48 d6 6b 1a 67 16 12 ab .34<..V.H.k.g... 00:22:29.519 000001c0 ba 01 d7 b4 42 f4 6c 41 7f 56 2f 47 75 d3 fd d4 ....B.lA.V/Gu... 
00:22:29.519 000001d0 40 67 8f 94 73 b1 50 b0 4c 46 38 8b 57 37 e8 25 @g..s.P.LF8.W7.% 00:22:29.519 000001e0 0e e0 e8 79 07 aa 38 62 08 6d 7e ea 54 84 25 ca ...y..8b.m~.T.%. 00:22:29.519 000001f0 20 1b 50 e1 c5 59 0b 95 f1 a0 76 fe 31 8a 7d 0f .P..Y....v.1.}. 00:22:29.519 00000200 05 2e 8c f5 9a 4e d0 34 3a a5 10 72 9f 41 90 98 .....N.4:..r.A.. 00:22:29.519 00000210 55 da f9 fe 0b 6e c6 be ee 6a 59 20 6a c9 43 c8 U....n...jY j.C. 00:22:29.519 00000220 02 61 2c a9 a5 e5 c4 f1 d1 aa bf ba 5c 01 e7 be .a,.........\... 00:22:29.519 00000230 83 a8 6a af 68 89 05 fc 5a 1d 7a 1a c0 d6 5e 37 ..j.h...Z.z...^7 00:22:29.519 00000240 1d e3 75 e0 df a5 56 35 c7 71 1e c3 d5 97 a0 f5 ..u...V5.q...... 00:22:29.519 00000250 b9 e7 61 30 26 3c cf 00 72 88 c2 12 ad a9 a6 67 ..a0&<..r......g 00:22:29.519 00000260 80 83 77 fd 03 0f 12 f4 c4 30 44 9d 6a be 85 d0 ..w......0D.j... 00:22:29.519 00000270 fa 3a f3 ee 1b c7 f0 9c 3d c3 4d ac be 90 5a d1 .:......=.M...Z. 00:22:29.519 00000280 55 65 09 9c ff 4e 33 4c c6 a4 eb 7f 0d e3 16 d6 Ue...N3L........ 00:22:29.519 00000290 48 06 77 e1 7a c3 a3 9d 0b 16 d8 cd b3 ed 7c 2a H.w.z.........|* 00:22:29.519 000002a0 0f 8d 50 6c 93 c1 71 9a 40 2e 67 d3 93 cb fa 11 ..Pl..q.@.g..... 00:22:29.519 000002b0 6d 55 b3 df b6 fd 17 7d 91 2a 42 ad b6 19 8c 10 mU.....}.*B..... 00:22:29.519 000002c0 be e0 b1 5a 45 d6 42 75 7f e3 76 23 47 54 6a 8f ...ZE.Bu..v#GTj. 00:22:29.519 000002d0 16 92 5e 3d 1d 87 52 ce cc 7d 9d c6 d2 57 c9 34 ..^=..R..}...W.4 00:22:29.519 000002e0 18 82 41 03 85 52 7c 41 a3 f5 88 9f 74 d7 69 70 ..A..R|A....t.ip 00:22:29.519 000002f0 91 e9 06 51 9c d1 00 01 70 fe f1 78 d7 49 e4 a2 ...Q....p..x.I.. 00:22:29.519 00000300 98 ea 86 72 d9 f0 4f b2 0e 7e a6 4c e6 ae 74 be ...r..O..~.L..t. 00:22:29.519 00000310 c9 1e ca 3e c5 84 93 69 5c f8 53 61 54 77 49 1e ...>...i\.SaTwI. 00:22:29.519 00000320 90 c4 12 be 21 74 73 fb b8 ee 9a 99 52 bb c4 22 ....!ts.....R.." 00:22:29.519 00000330 bb 48 69 f6 57 07 62 7d f9 b7 a8 1a 2f aa 6b 49 .Hi.W.b}..../.kI 00:22:29.519 00000340 11 c8 5a 37 57 df 49 1b 70 11 2d 55 4b 46 45 7b ..Z7W.I.p.-UKFE{ 00:22:29.519 00000350 3d 0d b4 bd ee 96 84 19 aa 90 a5 df 15 a6 87 1b =............... 00:22:29.519 00000360 91 dd 7e 09 00 6c f3 71 e4 27 6c fe 1b a0 6a a5 ..~..l.q.'l...j. 00:22:29.519 00000370 f9 b9 7a 26 08 36 7e eb fe 22 d0 48 d7 24 d0 3f ..z&.6~..".H.$.? 00:22:29.519 00000380 d0 26 e8 ee 35 57 1a 92 94 c2 c0 1c 93 57 41 b1 .&..5W.......WA. 00:22:29.519 00000390 32 43 4c a7 2f 31 ca 19 24 df a8 20 25 57 14 6d 2CL./1..$.. %W.m 00:22:29.519 000003a0 de fe 03 84 10 f8 e9 47 d0 8f 26 a4 65 c4 08 2b .......G..&.e..+ 00:22:29.519 000003b0 f0 1a c0 43 b7 91 63 66 2f 06 b2 fa 40 74 89 33 ...C..cf/...@t.3 00:22:29.519 000003c0 1a 8b 45 e1 df d6 38 e3 16 e1 56 ac 99 00 ed 47 ..E...8...V....G 00:22:29.519 000003d0 28 3c 8a 9c fc 2c c5 e0 4c d1 27 c0 a8 d8 8e 03 (<...,..L.'..... 
00:22:29.519 000003e0 4c 1f 07 a2 3e 23 a0 0c e0 16 2d b5 c4 26 93 52 L...>#....-..&.R 00:22:29.520 000003f0 58 47 ad 22 d7 c3 17 d2 14 be de 31 d1 5b 7d 35 XG.".......1.[}5 00:22:29.520 [2024-09-27 13:27:18.684198] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key1, hash=2, dhgroup=5, seq=3775755265, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.520 [2024-09-27 13:27:18.684644] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.520 [2024-09-27 13:27:18.772301] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.520 [2024-09-27 13:27:18.772786] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.520 [2024-09-27 13:27:18.773021] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.520 [2024-09-27 13:27:18.773366] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.520 [2024-09-27 13:27:18.825170] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.520 [2024-09-27 13:27:18.825438] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.520 [2024-09-27 13:27:18.825649] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 5 (ffdhe8192) 00:22:29.520 [2024-09-27 13:27:18.825870] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.520 [2024-09-27 13:27:18.826257] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.520 ctrlr pubkey: 00:22:29.520 00000000 eb a4 d5 10 e9 76 be 17 65 04 47 d0 10 56 1d a8 .....v..e.G..V.. 00:22:29.520 00000010 76 82 24 1c 76 13 54 14 13 2a f1 d0 66 f1 0c 64 v.$.v.T..*..f..d 00:22:29.520 00000020 a9 01 f5 a3 85 3c ec 32 57 2e 5e 8b 9f 62 82 59 .....<.2W.^..b.Y 00:22:29.520 00000030 f2 c2 ba 31 60 c8 e3 6c 0c 19 9f 9a cf c5 b5 30 ...1`..l.......0 00:22:29.520 00000040 9a 0d 5e 32 00 8a fa a8 a1 60 42 dd e7 3e ad ab ..^2.....`B..>.. 00:22:29.520 00000050 fd 8d c9 c1 7c e5 5b ad 89 69 59 59 d9 c4 30 df ....|.[..iYY..0. 00:22:29.520 00000060 18 80 aa e5 3a a1 dc 72 18 7d c6 48 09 a8 e3 8d ....:..r.}.H.... 00:22:29.520 00000070 38 10 0f 2e f9 5a 8c 33 00 05 b1 89 e2 a5 0a 38 8....Z.3.......8 00:22:29.520 00000080 98 84 03 fe e6 70 83 47 5e 69 79 88 6b a1 4e b5 .....p.G^iy.k.N. 00:22:29.520 00000090 f0 de ca 59 cb 79 2c 6e f1 cb 6a c7 9f 2b 3a 9d ...Y.y,n..j..+:. 00:22:29.520 000000a0 57 b8 21 36 55 b4 25 fd 89 28 e3 27 bc a6 74 f9 W.!6U.%..(.'..t. 00:22:29.520 000000b0 f0 5d 04 8a 61 ba 10 5e cf 41 9b 9e 57 fe 85 00 .]..a..^.A..W... 00:22:29.520 000000c0 aa 16 a4 2c 85 09 dc 05 d1 41 8c d7 0d bd 37 68 ...,.....A....7h 00:22:29.520 000000d0 2a 78 cf 86 9e 82 8c 88 32 64 23 18 66 a8 7f 99 *x......2d#.f... 00:22:29.520 000000e0 fd e1 d0 20 95 c0 0c ab f3 cd 7d 25 ea 57 54 c8 ... ......}%.WT. 
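The negotiate entries above identify the hash and DH group by number alongside the name nvme_auth.c prints for them: digest 2 is sha384 and dhgroup 5 is ffdhe8192, and the len=48 in the send_reply entry is consistent with SHA-384's 48-byte output. A minimal Python sketch of that bookkeeping follows; it is not SPDK code, and the lookup tables only cover the two identifiers that actually appear in this log:

import hashlib

# Names exactly as the debug log above prints them for the IDs seen in this run
# (illustrative subset, not the full NVMe DH-HMAC-CHAP tables).
HASH_NAMES = {2: "sha384"}
DHGROUP_NAMES = {5: "ffdhe8192"}

def describe_negotiation(digest_id: int, dhgroup_id: int) -> str:
    """Render a negotiate entry the way the log above labels it."""
    return (f"digest: {digest_id} ({HASH_NAMES[digest_id]}), "
            f"dhgroup: {dhgroup_id} ({DHGROUP_NAMES[dhgroup_id]})")

if __name__ == "__main__":
    print(describe_negotiation(2, 5))
    # len=48 in the send_reply entries matches SHA-384's digest size.
    assert hashlib.sha384().digest_size == 48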
00:22:29.520 000000f0 66 c3 31 c5 45 3f 8c 81 50 95 b5 04 33 3f 3e ac f.1.E?..P...3?>. 00:22:29.520 00000100 a7 96 e3 c1 e7 57 d1 7b a3 66 32 60 41 f8 bc a1 .....W.{.f2`A... 00:22:29.520 00000110 c9 ad 20 df e6 dd 95 42 24 f0 d5 4a 3d 33 ee 39 .. ....B$..J=3.9 00:22:29.520 00000120 e0 e9 44 0d ff 57 2e 6d 7f 5c 12 9a fe 38 59 ed ..D..W.m.\...8Y. 00:22:29.520 00000130 c4 bf 91 73 ff ac b4 fd 8e 13 0c 71 db 98 e1 e6 ...s.......q.... 00:22:29.520 00000140 42 a0 09 6d b9 fd 1d e2 0e db d1 fa ed b7 7a 52 B..m..........zR 00:22:29.520 00000150 fa b8 2a e6 ef 8b 77 be 42 69 5a 99 0d 38 bd 28 ..*...w.BiZ..8.( 00:22:29.520 00000160 17 9e a8 09 99 e3 79 c1 36 6b da 77 81 a1 5a e6 ......y.6k.w..Z. 00:22:29.520 00000170 cb 08 89 27 9c 01 f6 3b c7 ce c5 05 84 7a 46 3c ...'...;.....zF< 00:22:29.520 00000180 b0 fb aa cf da e5 56 e2 88 80 5b a3 11 22 4b 8b ......V...[.."K. 00:22:29.520 00000190 65 a7 d8 6f aa 3b 42 59 57 84 b0 7b 62 46 5a 18 e..o.;BYW..{bFZ. 00:22:29.520 000001a0 05 9e 32 7d de ae 3a ff 64 81 f5 e9 df 45 de 24 ..2}..:.d....E.$ 00:22:29.520 000001b0 bd 1f 93 83 c5 ee ca fa 58 03 f5 c4 eb d3 73 78 ........X.....sx 00:22:29.520 000001c0 5c d6 9a 8a 04 d8 33 69 0f 75 d9 85 ec 1f ab ef \.....3i.u...... 00:22:29.520 000001d0 11 ad c2 b2 d4 0a d7 13 5d b2 a1 13 85 54 29 df ........]....T). 00:22:29.520 000001e0 2d d2 58 c7 19 d9 21 5c 23 71 c4 f3 25 c4 a4 6c -.X...!\#q..%..l 00:22:29.520 000001f0 34 c3 8f a8 0b af 15 23 ac 6b cc 93 be a0 60 ab 4......#.k....`. 00:22:29.520 00000200 6c 16 0a 98 2b 9a 0c 79 32 d7 e8 9e 67 a3 05 b3 l...+..y2...g... 00:22:29.520 00000210 a9 47 71 47 7f 55 5f 19 cf 4f cf a5 5d e0 b0 84 .GqG.U_..O..]... 00:22:29.520 00000220 71 2d d6 a7 01 3b 34 88 06 de b3 0d 3b d3 ba df q-...;4.....;... 00:22:29.520 00000230 1a 04 36 19 e6 62 1c ea 77 2e 92 b5 b8 4b f5 45 ..6..b..w....K.E 00:22:29.520 00000240 87 e4 e2 f6 ea 37 d5 1c 40 76 c0 22 48 15 57 04 .....7..@v."H.W. 00:22:29.520 00000250 b2 9e c3 a5 c4 13 3e 47 46 a6 a1 e6 dc c4 0d 43 ......>GF......C 00:22:29.520 00000260 d4 cd 4b ea 3b 72 01 93 fa 9a 6b 34 54 21 e0 f4 ..K.;r....k4T!.. 00:22:29.520 00000270 ff ad 1d 41 26 99 8d 31 1c f6 55 25 34 a4 a6 3b ...A&..1..U%4..; 00:22:29.520 00000280 dc ef b4 b3 c2 7f 65 24 bb a5 9a 53 41 d3 4a 3a ......e$...SA.J: 00:22:29.520 00000290 c9 d1 78 09 31 10 57 97 eb 4f 10 1c 4a c4 22 fa ..x.1.W..O..J.". 00:22:29.520 000002a0 bb 3c 32 18 86 7d 1c bb 12 9e c0 ce 35 4b e0 eb .<2..}......5K.. 00:22:29.520 000002b0 1e 5a d3 f5 17 f2 4c 82 99 e3 fc ad ed 63 a3 14 .Z....L......c.. 00:22:29.520 000002c0 28 07 e9 bd bc 0b 34 58 41 51 0d 65 48 79 fc 23 (.....4XAQ.eHy.# 00:22:29.520 000002d0 54 78 10 1f c8 85 5e 2b 03 24 2a aa 01 a0 63 73 Tx....^+.$*...cs 00:22:29.520 000002e0 ca c4 fc 49 e1 29 1d 0c f4 39 88 0a 24 54 ea 68 ...I.)...9..$T.h 00:22:29.520 000002f0 67 f1 81 0b 00 6d 4e cf b5 ca 83 4b 4a 5b e5 d1 g....mN....KJ[.. 00:22:29.520 00000300 d2 02 d9 f3 23 69 ee 64 ab d9 ea 0e 3f df a2 4e ....#i.d....?..N 00:22:29.520 00000310 2e 34 d8 60 4c 01 26 1a 02 2d 38 d0 d7 2b ba c1 .4.`L.&..-8..+.. 00:22:29.520 00000320 b6 b4 89 6a 06 43 b3 9f 83 d0 b9 5f e5 07 c9 ad ...j.C....._.... 00:22:29.520 00000330 6c 91 75 3b 3d ff 42 e0 bf 96 36 2f d3 73 eb 97 l.u;=.B...6/.s.. 00:22:29.520 00000340 87 62 31 6a 2a e5 97 9a f9 e7 b3 a7 53 90 58 58 .b1j*.......S.XX 00:22:29.520 00000350 ae 5b f9 2c 8c d3 aa 1b ee a6 9d 2b eb 64 5e dd .[.,.......+.d^. 00:22:29.520 00000360 4d 7c 4c e9 21 7f 97 bf 08 b6 c1 e5 e1 cf d7 82 M|L.!........... 
00:22:29.520 00000370 51 28 69 d2 d0 05 9a 4c 7d 2f d5 b3 af 5f 3b 16 Q(i....L}/..._;. 00:22:29.520 00000380 5b 0f 62 69 6f cb 06 28 d2 b5 a1 d3 11 02 0c dd [.bio..(........ 00:22:29.520 00000390 5f 85 4b 83 05 82 52 39 c9 89 c9 83 77 ee 59 d3 _.K...R9....w.Y. 00:22:29.520 000003a0 f9 ad 6a 89 22 bc c7 b1 49 cf 40 47 c3 d1 47 53 ..j."...I.@G..GS 00:22:29.520 000003b0 34 e1 7a 60 a5 d0 c2 36 91 0e 7a df ce a5 30 ac 4.z`...6..z...0. 00:22:29.520 000003c0 ed 5f 54 c5 33 30 3c d7 bb 21 aa fd 68 9d 90 cd ._T.30<..!..h... 00:22:29.520 000003d0 68 9a ce 55 42 b0 7c b7 f0 13 e0 65 62 32 d1 e8 h..UB.|....eb2.. 00:22:29.520 000003e0 cd 11 87 7f fa 50 85 2c 08 05 e8 a5 cd 91 42 fd .....P.,......B. 00:22:29.520 000003f0 51 2f 73 3a 7b b7 50 e2 5f 5f aa f0 60 60 cc 24 Q/s:{.P.__..``.$ 00:22:29.520 host pubkey: 00:22:29.520 00000000 88 42 b8 bf db f1 52 f4 b8 60 7d 2d 12 ef f4 f6 .B....R..`}-.... 00:22:29.520 00000010 00 21 63 9c 1e 30 55 3e ca 6d 22 41 3c 32 04 51 .!c..0U>.m"A<2.Q 00:22:29.520 00000020 44 f7 e3 13 41 84 da cc ae 69 fb 17 c2 d4 9d a6 D...A....i...... 00:22:29.520 00000030 b2 53 b1 14 5f ec 74 2e d7 65 c7 0f 74 59 cd 90 .S.._.t..e..tY.. 00:22:29.520 00000040 c3 f7 f2 0d 95 0e cd d6 f8 4f 55 c6 08 e6 02 18 .........OU..... 00:22:29.520 00000050 8a da 24 94 87 a3 ad 7c 41 3c d1 1a 58 13 29 48 ..$....|A<..X.)H 00:22:29.520 00000060 79 b2 2b 10 66 78 86 50 43 a9 75 b9 be 07 19 24 y.+.fx.PC.u....$ 00:22:29.520 00000070 98 be e4 7c 99 75 20 cf af 7d bb c0 a9 f8 c4 65 ...|.u ..}.....e 00:22:29.520 00000080 60 b5 c5 ed 8c 80 af 45 32 f0 a7 5a 7e 8e bb 12 `......E2..Z~... 00:22:29.520 00000090 1d 35 06 e3 2c f1 b4 51 1e 21 d7 82 07 b7 0b 81 .5..,..Q.!...... 00:22:29.520 000000a0 ce 4a 59 57 6e 9b 20 fd 34 80 c0 4a 2c 0c cd 08 .JYWn. .4..J,... 00:22:29.520 000000b0 0f 22 0a 26 49 87 e3 d9 fa 07 2f 22 d2 e4 d9 24 .".&I...../"...$ 00:22:29.520 000000c0 26 e7 49 c7 87 36 e2 1b b7 91 82 d4 19 b8 55 31 &.I..6........U1 00:22:29.520 000000d0 f3 49 c3 be 0c fe da e2 e4 ae ad 49 e5 46 0a 55 .I.........I.F.U 00:22:29.520 000000e0 21 6e 84 2f 1b 0b c0 4d e6 c3 c7 53 11 88 0a 47 !n./...M...S...G 00:22:29.520 000000f0 e1 91 c4 fd ac f3 1e 3b 0b e9 36 73 d0 11 b1 37 .......;..6s...7 00:22:29.520 00000100 3d 6a 60 ef 67 0d 92 7d 78 28 78 35 c8 0e c8 4a =j`.g..}x(x5...J 00:22:29.520 00000110 67 66 ba 83 63 79 6d 49 1d 34 14 02 db a0 85 7e gf..cymI.4.....~ 00:22:29.520 00000120 ab a5 79 b9 24 e9 af f6 56 5c 04 ee 1f 8e 6a be ..y.$...V\....j. 00:22:29.520 00000130 7d 6d a1 25 e9 e9 eb 0b 6b ed fc 68 3f 44 b4 8a }m.%....k..h?D.. 00:22:29.520 00000140 b8 28 29 dc e9 81 67 40 9e c7 bf d4 11 10 a8 b5 .()...g@........ 00:22:29.520 00000150 b8 7f 52 27 64 41 2b a0 bf 3d e7 6d 14 42 20 b1 ..R'dA+..=.m.B . 00:22:29.520 00000160 e1 29 93 90 c6 b1 81 9e e3 16 c0 a7 af 15 04 8e .).............. 00:22:29.520 00000170 e3 65 a7 f8 e7 4a f7 e1 06 b5 79 79 2a 52 0a 49 .e...J....yy*R.I 00:22:29.520 00000180 9e 55 6c 31 70 6c d1 79 65 e6 cf d2 0a 7e 3f c5 .Ul1pl.ye....~?. 00:22:29.520 00000190 68 e6 e2 62 25 4d 34 3c 41 b5 c7 ad 49 ad ef 53 h..b%M4........x., 00:22:29.520 000001d0 f5 7c 51 69 ff 02 7b c2 13 75 96 be 72 ca 61 6f .|Qi..{..u..r.ao 00:22:29.520 000001e0 fe 85 49 47 b1 f8 97 81 8f db 31 63 37 fc c1 29 ..IG......1c7..) 00:22:29.520 000001f0 21 8d 23 c0 57 b1 9e a4 5f df 66 f4 a9 21 0c a1 !.#.W..._.f..!.. 00:22:29.520 00000200 30 e0 02 c8 01 ec 21 9b 29 fd 61 5b 74 07 6d cc 0.....!.).a[t.m. 00:22:29.520 00000210 d9 b3 07 19 c4 96 e4 c3 7b 06 62 b1 a6 41 61 d3 ........{.b..Aa. 
00:22:29.520 00000220 4e 55 89 69 b6 11 d5 42 84 35 c4 74 58 eb 13 59 NU.i...B.5.tX..Y 00:22:29.520 00000230 cb 3c cd 2c d5 84 d8 7d bb 37 7c 84 96 32 72 a7 .<.,...}.7|..2r. 00:22:29.520 00000240 43 b9 3b 12 4b 45 e8 99 fb 9d 94 26 e4 de 45 65 C.;.KE.....&..Ee 00:22:29.520 00000250 fa 12 08 55 76 4f 35 d7 95 f8 51 5f 51 84 64 54 ...UvO5...Q_Q.dT 00:22:29.520 00000260 d2 11 3b 28 f7 b4 3b 11 e6 84 b7 1c 26 5d ca 07 ..;(..;.....&].. 00:22:29.520 00000270 df 8e f9 8f 1d b7 2e d9 80 cf 7a d7 1b 49 bf 42 ..........z..I.B 00:22:29.521 00000280 df c3 44 2c 0c 48 f3 fe 0f 02 37 5c ca b8 73 3d ..D,.H....7\..s= 00:22:29.521 00000290 c2 44 0d 7a e4 c7 53 12 a1 cd 1c 12 cb 0f 9b ff .D.z..S......... 00:22:29.521 000002a0 ad 29 c8 30 70 ac 76 9b a3 b6 cd b0 b4 42 6b 3b .).0p.v......Bk; 00:22:29.521 000002b0 e9 48 e3 fb 5a 8a 1e fd 5f 9f 1c b2 39 ee 5d 64 .H..Z..._...9.]d 00:22:29.521 000002c0 35 b8 47 d5 3b 01 ad e5 d7 2a c8 43 8b ff 0a 3d 5.G.;....*.C...= 00:22:29.521 000002d0 d8 81 8a 24 8b 4e 82 2e 3d fb 01 8c 15 46 e4 9a ...$.N..=....F.. 00:22:29.521 000002e0 4d 04 d4 cf 1f 2f 0f 5f 94 0b 49 17 bc d0 53 7f M..../._..I...S. 00:22:29.521 000002f0 62 e0 ac 06 3f e4 f1 fd 15 72 8a a5 42 00 58 0f b...?....r..B.X. 00:22:29.521 00000300 fd d0 60 f0 ab 92 f1 f3 c8 2f 1e b3 c2 18 14 79 ..`....../.....y 00:22:29.521 00000310 b1 c3 77 a1 18 83 5c bb f1 a8 10 c5 aa 18 da 40 ..w...\........@ 00:22:29.521 00000320 cc c4 d7 c1 21 93 06 6e 69 55 f2 94 8a e2 97 97 ....!..niU...... 00:22:29.521 00000330 c1 5a 8f 74 37 06 77 62 f1 fc 65 82 08 8a 99 08 .Z.t7.wb..e..... 00:22:29.521 00000340 b3 8b 5a 48 54 2f 48 5f ef d4 ab ea 27 9e af f9 ..ZHT/H_....'... 00:22:29.521 00000350 f3 bd 8b 6a ed 9b 9a fd 24 26 24 f2 1b dc bd e9 ...j....$&$..... 00:22:29.521 00000360 6a 4a c8 fe 49 0e 5e f3 92 0a d6 63 20 32 33 6a jJ..I.^....c 23j 00:22:29.521 00000370 c2 e0 6a b5 ec c5 21 50 ae 3c 31 7b f2 11 df da ..j...!P.<1{.... 00:22:29.521 00000380 f6 14 58 bb 95 d9 06 73 d6 2e 69 91 d4 45 b9 83 ..X....s..i..E.. 00:22:29.521 00000390 72 14 b4 34 7e 3a df 4a 13 a3 85 37 22 73 53 76 r..4~:.J...7"sSv 00:22:29.521 000003a0 e9 ba b8 24 2d 4f fb 03 3e 3e f8 bb 10 03 1b 6e ...$-O..>>.....n 00:22:29.521 000003b0 8e 5a 04 e0 18 47 7d 55 85 c0 36 d9 83 30 d0 4f .Z...G}U..6..0.O 00:22:29.521 000003c0 17 b8 04 4c 37 5c d8 ab b7 eb 22 84 bf 6c e0 8d ...L7\...."..l.. 00:22:29.521 000003d0 bd ac f3 1e bf 2e 37 7b 12 fb 1e 42 fc dd a6 14 ......7{...B.... 00:22:29.521 000003e0 0f 24 65 e5 08 ea 71 81 2f 91 46 82 e2 36 0c 63 .$e...q./.F..6.c 00:22:29.521 000003f0 3f 8f fb e6 fb c6 64 97 ae 2c 54 68 31 f8 b3 fd ?.....d..,Th1... 00:22:29.521 dh secret: 00:22:29.521 00000000 94 cb bc 49 8d ee b8 95 26 c1 fa a8 75 16 94 f5 ...I....&...u... 00:22:29.521 00000010 5a 49 41 b8 f7 39 9a b5 9c 7d 6e c2 7a 7c 93 bf ZIA..9...}n.z|.. 00:22:29.521 00000020 4e 43 7e 07 38 41 7d 77 95 5e 6a 2a 88 27 67 60 NC~.8A}w.^j*.'g` 00:22:29.521 00000030 00 e3 87 9a ea f6 a0 3e 85 cc f6 a9 34 94 ad d9 .......>....4... 00:22:29.521 00000040 6a 3b 65 3d a4 bf ad 0f bd 08 5b c0 ca 39 99 ef j;e=......[..9.. 00:22:29.521 00000050 2d e0 75 27 ef 10 4c 9d df 93 f7 6a f3 78 68 8a -.u'..L....j.xh. 00:22:29.521 00000060 bf d7 e9 6e 0e 7c f1 bf fd e4 50 d0 ee fa ca 77 ...n.|....P....w 00:22:29.521 00000070 03 36 62 54 2d d1 a5 4b 2b 27 59 92 cc 70 d3 e9 .6bT-..K+'Y..p.. 00:22:29.521 00000080 f8 1f 15 32 89 0e 1d fb c0 4f 13 02 d7 00 f2 36 ...2.....O.....6 00:22:29.521 00000090 27 34 cd 63 5d 59 97 95 b9 2c 80 8c 9d 00 ce de '4.c]Y...,...... 
00:22:29.521 000000a0 72 8f 28 41 e4 80 80 eb 77 c8 5e 5d d3 db d7 a1 r.(A....w.^].... 00:22:29.521 000000b0 10 ac 54 44 1e 4a 8a e9 6d 3c 2a d3 42 3b a8 36 ..TD.J..m<*.B;.6 00:22:29.521 000000c0 bb b9 20 cb 3f cd b8 ab 19 95 f5 3c 31 e7 08 83 .. .?......<1... 00:22:29.521 000000d0 1e ea 44 02 64 56 81 7b 5f c5 1f 53 24 b1 4b 92 ..D.dV.{_..S$.K. 00:22:29.521 000000e0 de 3e 60 e3 5e 1e fc d2 fb 7c 04 cc 24 0a d5 d8 .>`.^....|..$... 00:22:29.521 000000f0 78 a8 3a a9 a7 4f 10 a0 78 73 10 d4 00 fd 2a 92 x.:..O..xs....*. 00:22:29.521 00000100 c0 9e 99 a2 71 d9 c2 5f 17 e4 36 ee 26 49 8d 1a ....q.._..6.&I.. 00:22:29.521 00000110 cc da af d6 79 62 a4 e6 81 ad 39 2c 2d 86 1d 8e ....yb....9,-... 00:22:29.521 00000120 e0 2d e0 3a 41 3d 38 8d 74 5f 0b 77 26 b6 77 91 .-.:A=8.t_.w&.w. 00:22:29.521 00000130 57 2a d6 8a 99 0c ba 68 0e cf d0 d3 86 e8 50 e5 W*.....h......P. 00:22:29.521 00000140 cf ad 72 69 5b 57 6a 04 d7 50 ce 0c ab 2a b7 24 ..ri[Wj..P...*.$ 00:22:29.521 00000150 c9 68 09 74 46 33 1e 9f f5 60 c1 9e 6e 07 c5 c5 .h.tF3...`..n... 00:22:29.521 00000160 1e 49 d4 24 f1 ac e7 6e b8 25 a7 e0 30 13 b9 2f .I.$...n.%..0../ 00:22:29.521 00000170 69 06 c3 c5 12 92 7b 8e db a2 cb a0 80 7c 54 c9 i.....{......|T. 00:22:29.521 00000180 47 71 f9 f5 57 ad 13 25 2c 7c a5 d8 21 86 63 5e Gq..W..%,|..!.c^ 00:22:29.521 00000190 b8 02 81 ad 5e 1f b0 0e b1 48 2c d3 c5 57 02 90 ....^....H,..W.. 00:22:29.521 000001a0 82 b9 8e 17 d5 7d 03 0b 16 0a ea 9e 99 22 0e 00 .....}.......".. 00:22:29.521 000001b0 3c c3 64 cf bd 88 61 63 f1 9c f1 6a 63 85 ed 1d <.d...ac...jc... 00:22:29.521 000001c0 10 42 13 e8 46 ec f3 6f 8a 1b f5 fd da 2f c0 ff .B..F..o...../.. 00:22:29.521 000001d0 4e 60 70 b0 67 d8 56 06 a9 29 12 34 8e 89 89 d9 N`p.g.V..).4.... 00:22:29.521 000001e0 a5 22 46 06 c9 8d dc f4 da 14 04 e6 94 8d 8d 4c ."F............L 00:22:29.521 000001f0 be 80 68 6d de 55 02 b5 5e f2 7b 80 03 eb 2b ce ..hm.U..^.{...+. 00:22:29.521 00000200 6f e8 d2 86 ca 7a e2 74 78 da 64 8b 90 04 67 14 o....z.tx.d...g. 00:22:29.521 00000210 46 85 78 da 60 d1 8d 6e 81 ff f1 e2 5c d6 ea d3 F.x.`..n....\... 00:22:29.521 00000220 71 26 49 1b b7 b8 f5 34 05 a1 03 f0 cc e2 1e bc q&I....4........ 00:22:29.521 00000230 c4 7e f5 45 01 c4 99 98 86 a3 e4 37 60 9e 67 1f .~.E.......7`.g. 00:22:29.521 00000240 f5 3b 0a 21 95 b6 ff db a6 79 b4 26 55 5f 4c a6 .;.!.....y.&U_L. 00:22:29.521 00000250 06 4e 54 97 38 c5 d7 3d 5e d5 3c 61 12 4c 10 29 .NT.8..=^.. 00:22:29.521 000002a0 f4 91 44 b6 9b 69 69 4a ca e8 1b 5a 5e 4c ef 42 ..D..iiJ...Z^L.B 00:22:29.521 000002b0 d7 57 f9 be 89 d7 02 d4 a5 3e 22 de c1 af 30 6e .W.......>"...0n 00:22:29.521 000002c0 04 97 b3 cc 99 95 9a f9 f9 47 a3 a9 06 24 eb 2a .........G...$.* 00:22:29.521 000002d0 38 1b ca 15 65 39 7b c2 ba 73 70 0f 4f 9c 3b 3c 8...e9{..sp.O.;< 00:22:29.521 000002e0 2f 01 bd 4d 44 fe bc a7 f0 c1 a9 87 df b8 fc 46 /..MD..........F 00:22:29.521 000002f0 b1 90 1a 25 57 ef 5f 8f be 0b fe 94 a7 bd 79 ff ...%W._.......y. 00:22:29.521 00000300 be ff c0 c5 9b 9e 9a 66 bc f3 3d f9 8c 06 70 ad .......f..=...p. 00:22:29.521 00000310 37 15 d8 bf 48 3e 69 3f 99 7e 05 f6 df 26 54 50 7...H>i?.~...&TP 00:22:29.521 00000320 c5 a4 52 bd cb 37 c3 36 31 b1 73 a9 02 d9 42 87 ..R..7.61.s...B. 00:22:29.521 00000330 2a 80 eb 03 7d df c2 4e 6e 7c 3a 8d 9a ba ba 80 *...}..Nn|:..... 00:22:29.521 00000340 ba d0 73 73 40 32 39 4e d2 95 32 7d 69 66 67 a1 ..ss@29N..2}ifg. 00:22:29.521 00000350 86 1e e7 35 05 20 33 60 76 49 9a 6b 38 a9 b2 61 ...5. 
3`vI.k8..a 00:22:29.521 00000360 20 c1 b1 bb f8 35 5a 5b 22 f6 63 9f 58 1f 45 cd ....5Z[".c.X.E. 00:22:29.521 00000370 b9 a5 a9 78 41 b6 c8 20 5d 41 bd eb e1 db bc 01 ...xA.. ]A...... 00:22:29.521 00000380 1e 7a 64 47 e0 b5 93 57 e7 28 6a 71 c8 ea 79 ab .zdG...W.(jq..y. 00:22:29.521 00000390 0b ed 49 58 6e f7 11 01 07 2b 03 51 60 d5 3e 4f ..IXn....+.Q`.>O 00:22:29.521 000003a0 e8 30 14 21 42 1b 53 98 6c 2f 7b 1f 8e ec 76 db .0.!B.S.l/{...v. 00:22:29.521 000003b0 d5 3f 2a 21 f6 6c d8 43 fa 47 1a 61 3f 28 c8 ae .?*!.l.C.G.a?(.. 00:22:29.521 000003c0 b4 2b 20 f5 82 fb 5d 34 ea e1 47 71 34 1b 3e 1a .+ ...]4..Gq4.>. 00:22:29.521 000003d0 f3 4f 0b e7 3e 27 21 e0 24 8b a2 63 1c 28 64 af .O..>'!.$..c.(d. 00:22:29.521 000003e0 b1 5e 07 8c 13 d8 4d 34 a8 5d 4c e9 c8 dc d4 af .^....M4.]L..... 00:22:29.521 000003f0 b9 c4 1b 4d 1f d9 1d 78 df e3 35 dc 10 c0 18 0b ...M...x..5..... 00:22:29.521 [2024-09-27 13:27:18.998465] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key1, hash=2, dhgroup=5, seq=3775755266, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.521 [2024-09-27 13:27:18.999253] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.521 [2024-09-27 13:27:19.111112] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.521 [2024-09-27 13:27:19.111613] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.521 [2024-09-27 13:27:19.111870] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.521 [2024-09-27 13:27:19.112322] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.521 [2024-09-27 13:27:19.279490] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.521 [2024-09-27 13:27:19.279783] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 2 (sha384) 00:22:29.521 [2024-09-27 13:27:19.280012] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 5 (ffdhe8192) 00:22:29.521 [2024-09-27 13:27:19.280205] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.521 [2024-09-27 13:27:19.280578] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.521 ctrlr pubkey: 00:22:29.521 00000000 1c 51 09 ce 63 64 1d 30 50 22 6e d4 23 1b 6a 6f .Q..cd.0P"n.#.jo 00:22:29.521 00000010 26 e3 d4 82 71 bf 4a 21 2a 6a 2e 83 bf a3 1c 82 &...q.J!*j...... 00:22:29.521 00000020 d5 5c 64 bb b8 2a f9 32 d3 76 f0 7a 28 b7 62 f2 .\d..*.2.v.z(.b. 00:22:29.521 00000030 e3 8d 7d e6 45 5b 4f 14 05 90 c2 66 bf 89 f8 20 ..}.E[O....f... 00:22:29.521 00000040 f9 dd 69 34 85 1c 5e 18 cc 0a 91 3c b7 69 92 03 ..i4..^....<.i.. 00:22:29.521 00000050 9d 07 fc 6b 35 68 b8 5d 66 d1 e3 fd 33 66 63 9d ...k5h.]f...3fc. 00:22:29.521 00000060 1a 9a 22 56 d0 f3 7d 89 77 94 50 8a ce 2f 8d ad .."V..}.w.P../.. 
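Each intact host pubkey, ctrlr pubkey and dh secret buffer dumped above spans offsets 00000000 through 000003f0, i.e. 0x400 = 1024 bytes, which matches the negotiated 8192-bit ffdhe8192 group. The toy Python sketch below shows only the shape of that exchange; it deliberately uses the small textbook parameters p=23, g=5 instead of the real ffdhe8192 modulus, so it illustrates the math and nothing more:

import secrets

# Toy parameters purely for illustration; the exchange logged above uses the
# negotiated ffdhe8192 group, whose 8192-bit values are why every pubkey and
# dh secret dump is exactly 1024 bytes.
p, g = 23, 5

def dh_keypair():
    x = secrets.randbelow(p - 2) + 1      # private exponent
    return x, pow(g, x, p)                # (private, public = g^x mod p)

host_priv, host_pub = dh_keypair()        # corresponds to "host pubkey"
ctrlr_priv, ctrlr_pub = dh_keypair()      # corresponds to "ctrlr pubkey"

# Each side raises the peer's public value to its own private exponent and
# both arrive at the same value, the "dh secret" hex-dumped in the log.
assert pow(ctrlr_pub, host_priv, p) == pow(host_pub, ctrlr_priv, p)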
00:22:29.521 00000070 d5 dd f7 58 7c ae be c0 6c 6c 98 84 23 85 fa 75 ...X|...ll..#..u 00:22:29.521 00000080 ca 7c 17 be 02 dc a9 c7 30 ce 21 9d 28 1e 05 e5 .|......0.!.(... 00:22:29.521 00000090 3e 36 92 d8 6e 96 64 60 15 ad 06 e9 06 77 5b 57 >6..n.d`.....w[W 00:22:29.521 000000a0 95 7f 0f 8e 8b ca 34 95 f8 51 9e cd fc 97 60 35 ......4..Q....`5 00:22:29.521 000000b0 b4 fd 15 13 93 4c 9a 76 9a f2 0c 2a 19 7f 1f 6c .....L.v...*...l 00:22:29.521 000000c0 fa a3 67 c1 7e 9e f5 0e c4 6a b3 b1 2a aa 3c 3b ..g.~....j..*.<; 00:22:29.521 000000d0 4f 21 45 03 cf 13 cc 49 d9 55 7d e2 13 0b 40 1f O!E....I.U}...@. 00:22:29.521 000000e0 a5 be 17 b2 78 28 b6 80 6e 5b f8 f6 67 49 6e 5c ....x(..n[..gIn\ 00:22:29.521 000000f0 66 d2 2f b7 2c 15 b8 38 31 c8 4e f7 7b 68 25 d0 f./.,..81.N.{h%. 00:22:29.521 00000100 c9 3c c7 48 76 e9 c2 e5 f3 02 a6 d3 a2 d4 54 1f .<.Hv.........T. 00:22:29.521 00000110 02 62 f4 c1 5c 77 b6 5a 5e 54 d1 35 a2 46 3d b5 .b..\w.Z^T.5.F=. 00:22:29.521 00000120 d8 c4 36 d3 0a fd ab 54 8a bc ba 6e 66 92 72 63 ..6....T...nf.rc 00:22:29.521 00000130 c8 91 a2 05 e5 e7 ac a0 4a cb 9d 02 55 39 f8 db ........J...U9.. 00:22:29.521 00000140 79 39 6f ab 29 30 a6 31 93 a2 34 d1 00 b6 15 15 y9o.)0.1..4..... 00:22:29.521 00000150 f7 3b 26 b5 bd ff ea 5d 7c 72 c7 1f 3e 04 29 9b .;&....]|r..>.). 00:22:29.521 00000160 1d f2 36 ff 0a 35 85 36 da f1 84 e5 bf c8 3c 1b ..6..5.6......<. 00:22:29.521 00000170 ea 07 19 b8 10 f0 17 1d 0f fd fe e1 e4 95 14 3f ...............? 00:22:29.521 00000180 2a 85 0d 7c d8 e1 80 68 28 3d 4d ab 23 06 e6 a3 *..|...h(=M.#... 00:22:29.521 00000190 ae 6b 33 22 0e a6 05 77 7c bd 88 27 35 bf fc ad .k3"...w|..'5... 00:22:29.521 000001a0 11 d6 ec 4d 40 20 35 52 2b 39 f3 ea a1 d1 24 23 ...M@ 5R+9....$# 00:22:29.521 000001b0 92 c1 45 b1 84 d7 50 ea 9f 51 96 9a c4 7d 87 e7 ..E...P..Q...}.. 00:22:29.521 000001c0 86 7b 30 9f fd cd c5 f8 34 dc 35 82 5a 7d b3 2a .{0.....4.5.Z}.* 00:22:29.521 000001d0 21 88 dd 8d 0a 6f d2 c9 62 60 4e e6 3d ef 98 dd !....o..b`N.=... 00:22:29.521 000001e0 49 67 0a 29 e8 d1 4f 89 5e 0d c4 14 4e 45 21 80 Ig.)..O.^...NE!. 00:22:29.521 000001f0 a4 a4 94 23 89 f6 b3 2a 6b 25 fd 4c 82 fe 02 c7 ...#...*k%.L.... 00:22:29.522 00000200 4a 2f f1 88 66 04 06 8e 9c 4b c9 98 0e de 62 7a J/..f....K....bz 00:22:29.522 00000210 79 c2 a3 03 be 14 d4 44 88 e1 3a 6e 00 d7 58 f7 y......D..:n..X. 00:22:29.522 00000220 5c 9d 68 4d 10 75 22 12 f1 ff 1a 26 51 d3 89 c5 \.hM.u"....&Q... 00:22:29.522 00000230 fe 11 d2 d3 6d bc 10 8d c3 9f c1 84 44 e5 c2 35 ....m.......D..5 00:22:29.522 00000240 1b 4d 6a 43 07 da 38 d1 ce 16 49 86 bf a3 06 63 .MjC..8...I....c 00:22:29.522 00000250 07 d1 0f 3e b2 94 7e 16 bd 98 e0 9d e7 3e 95 72 ...>..~......>.r 00:22:29.522 00000260 94 28 4a f4 96 75 75 4a 35 29 bb 30 96 97 a7 60 .(J..uuJ5).0...` 00:22:29.522 00000270 d0 18 25 56 42 52 19 bc 7d 31 e7 ec 66 02 4f fa ..%VBR..}1..f.O. 00:22:29.522 00000280 49 c7 9a c8 29 d9 8a 29 86 32 3f a4 2c 42 71 c0 I...)..).2?.,Bq. 00:22:29.522 00000290 bd 76 50 cf a2 b6 6f 42 4e 3c 7f 6f 2d 60 df 3d .vP...oBN<.o-`.= 00:22:29.522 000002a0 9d ca 10 88 16 5b c9 1d a9 6a 74 9a e5 a1 97 ee .....[...jt..... 00:22:29.522 000002b0 7e c2 03 2d 4f 47 36 20 c0 06 88 6b e3 8c e3 c2 ~..-OG6 ...k.... 
00:22:29.522 000002c0 1c bf 46 bc fa 87 11 79 6d a5 f7 58 88 a5 ac 48 ..F....ym..X...H 00:22:29.522 000002d0 46 69 f1 39 a2 5a 3e 7a f3 84 4e 68 b6 39 b1 5f Fi.9.Z>z..Nh.9._ 00:22:29.522 000002e0 cb c9 5e ae 06 9b 02 12 06 1a 8a 93 93 fd 43 65 ..^...........Ce 00:22:29.522 000002f0 ac f9 24 b6 19 70 2b 19 5f be 84 54 84 0a 14 bc ..$..p+._..T.... 00:22:29.522 00000300 2a 32 75 53 c2 d0 7e 14 c0 60 24 bc 49 24 6b e5 *2uS..~..`$.I$k. 00:22:29.522 00000310 81 cb f2 69 26 31 62 64 29 a7 ec c4 78 72 d2 66 ...i&1bd)...xr.f 00:22:29.522 00000320 23 6a 9a db 8f 17 23 68 37 03 0a 84 4e 6e dd 60 #j....#h7...Nn.` 00:22:29.522 00000330 37 80 7a bd 4c af 38 76 80 0a b4 03 b9 40 25 56 7.z.L.8v.....@%V 00:22:29.522 00000340 ed 87 f7 19 fb f9 fc 6b 6b f0 a4 6e 9e 34 1a 5d .......kk..n.4.] 00:22:29.522 00000350 f6 de b2 9c 0e 13 57 cc de 4e d5 00 d2 f3 f2 af ......W..N...... 00:22:29.522 00000360 de 40 43 f1 b1 df 30 33 bd c3 4c e7 42 9f 88 69 .@C...03..L.B..i 00:22:29.522 00000370 48 bd 59 7b b6 31 0d 52 72 9e d1 2d 0e 82 b8 6e H.Y{.1.Rr..-...n 00:22:29.522 00000380 bd 55 8d ae 2e b1 4b c7 12 af 2b 9f 94 b5 53 c8 .U....K...+...S. 00:22:29.522 00000390 92 f5 17 70 33 2a 1a 09 3c 13 2a 2f 2e 47 06 68 ...p3*..<.*/.G.h 00:22:29.522 000003a0 ab 7f d9 94 a1 4a 6b f6 15 d2 b4 37 db 09 aa 45 .....Jk....7...E 00:22:29.522 000003b0 0a 7f be 93 5e 9b 3a 6e 6c 67 64 3a bc bd 65 c4 ....^.:nlgd:..e. 00:22:29.522 000003c0 60 4c ef 26 2d 30 12 05 46 60 5f dc 60 73 c7 a8 `L.&-0..F`_.`s.. 00:22:29.522 000003d0 5a 95 ad 07 ae 6f 7d ee 76 3e 7b 77 19 52 53 18 Z....o}.v>{w.RS. 00:22:29.522 000003e0 0c fc f6 8f 31 c3 5b 2c 7a e1 50 17 d9 67 a7 3f ....1.[,z.P..g.? 00:22:29.522 000003f0 51 11 b8 b3 c6 27 06 0a 8b 2c d4 24 19 88 c8 be Q....'...,.$.... 00:22:29.522 host pubkey: 00:22:29.522 00000000 82 5d d0 02 3e 7b 34 ab 2a 87 5e 6a 2d ce a9 f0 .]..>{4.*.^j-... 00:22:29.522 00000010 ad 10 f3 ef 47 2f ad 8a 6c 23 af 20 c0 3b d5 23 ....G/..l#. .;.# 00:22:29.522 00000020 22 2a ad a1 6d 7a d5 82 10 2e 30 22 7a 22 8c 31 "*..mz....0"z".1 00:22:29.522 00000030 3b 72 29 61 75 4c 4d 37 60 ca 75 6f 3c 06 4e a3 ;r)auLM7`.uo<.N. 00:22:29.522 00000040 12 67 a3 ef 6f 2e fb 53 ac 92 ed 4c 09 0a 13 13 .g..o..S...L.... 00:22:29.522 00000050 e8 4d dc 1c cd d1 29 7b 0d 39 ee e6 56 e1 9b a5 .M....){.9..V... 00:22:29.522 00000060 d2 a3 79 0d 6f 01 69 fa 74 6d 9a c9 fd c1 0f ba ..y.o.i.tm...... 00:22:29.522 00000070 7b 78 6c 36 47 3b ed 22 8b f2 73 09 b1 ba c6 a9 {xl6G;."..s..... 00:22:29.522 00000080 f8 0e 26 b2 73 0d a8 27 ae ba cc 56 d5 dd be b9 ..&.s..'...V.... 00:22:29.522 00000090 93 f4 87 6d 17 ef 7c 05 ba bd d9 ab 87 e5 ed d7 ...m..|......... 00:22:29.522 000000a0 a9 e7 95 e4 ed c4 9d 92 c3 b6 03 3c 68 9b 92 e6 ............4.p..s.13.x. 00:22:29.522 00000180 37 04 08 41 8c 14 96 dc b8 e7 b8 4a b4 0d e2 a4 7..A.......J.... 00:22:29.522 00000190 92 60 3a 61 9a 75 48 2e 55 63 86 d4 a1 45 05 50 .`:a.uH.Uc...E.P 00:22:29.522 000001a0 cc 98 18 14 a8 25 1d a8 0e aa ff 13 d3 16 cd 72 .....%.........r 00:22:29.522 000001b0 7a 44 a6 d2 0d 4c 7c 83 03 74 bc ed 04 e6 ed ff zD...L|..t...... 00:22:29.522 000001c0 f8 75 ba 55 b2 60 9c 19 d4 f1 2c 2e 89 dd 71 cc .u.U.`....,...q. 00:22:29.522 000001d0 fa 49 87 f2 84 8a 0b 7a 61 7e fb 84 8f 3f 3c 68 .I.....za~...?.. 00:22:29.522 00000040 2c a2 99 fb 00 fb a7 41 a0 17 01 f8 94 65 03 e4 ,......A.....e.. 00:22:29.522 00000050 c4 28 a0 25 a4 39 4e c2 53 e7 31 2f 22 36 e7 0b .(.%.9N.S.1/"6.. 00:22:29.522 00000060 96 f5 37 2d 70 df 77 0d 3a 2f 22 4a 33 4a 31 1e ..7-p.w.:/"J3J1. 
00:22:29.522 00000070 14 00 bb 3c 86 ad 72 c2 bb 8c 99 14 8c 49 73 9f ...<..r......Is. 00:22:29.522 00000080 e0 92 48 91 86 11 3c c9 df 25 fd dc 51 78 4d 85 ..H...<..%..QxM. 00:22:29.522 00000090 b0 5e 03 ea d9 de 8f e1 aa e8 7f 95 b8 96 e4 9e .^.............. 00:22:29.522 000000a0 d8 82 a9 56 b6 ad 5a 57 5e e9 12 8f 34 04 69 b1 ...V..ZW^...4.i. 00:22:29.522 000000b0 e3 3c c9 d9 49 7a 56 74 9f 36 54 db de 43 97 f9 .<..IzVt.6T..C.. 00:22:29.522 000000c0 e2 db 10 6e 49 b5 ca 49 7d 19 f3 bb 23 0a 2a 59 ...nI..I}...#.*Y 00:22:29.522 000000d0 d2 0b 4c 64 b7 b4 11 09 f6 36 db ad 50 7f 9b 4b ..Ld.....6..P..K 00:22:29.522 000000e0 3c df b2 16 31 04 c3 f2 b7 a6 3b 82 ac 6f 69 e8 <...1.....;..oi. 00:22:29.522 000000f0 d7 86 77 18 38 f4 d4 33 71 ae 96 b6 0b 6d 18 a7 ..w.8..3q....m.. 00:22:29.522 00000100 a5 09 04 b4 45 79 74 f9 6b 82 4b 8c e4 2b 6d e8 ....Eyt.k.K..+m. 00:22:29.522 00000110 4c d2 b1 ac 49 95 50 84 33 90 44 5b 9e 08 cd eb L...I.P.3.D[.... 00:22:29.522 00000120 4c 09 33 53 11 b0 f8 21 cc 54 69 1c 41 aa 5a 10 L.3S...!.Ti.A.Z. 00:22:29.522 00000130 76 5e fe 6c f0 de 3d 02 b3 dc 97 72 1d db 7d d6 v^.l..=....r..}. 00:22:29.522 00000140 ad 7d 8c 2a f1 a4 8d 80 89 e5 69 9f b1 52 af c7 .}.*......i..R.. 00:22:29.522 00000150 3e 63 23 8f 52 10 d7 e0 c1 fc e6 79 39 5e f0 df >c#.R......y9^.. 00:22:29.522 00000160 85 47 1a c5 bc 3f 47 8a 8d 2b dd 67 0e 0e 96 cf .G...?G..+.g.... 00:22:29.522 00000170 5a 0e aa 48 bc 09 a9 30 b7 e6 fc d9 d0 f1 5e 4c Z..H...0......^L 00:22:29.522 00000180 20 75 5f b2 1d 56 fc d5 de 21 83 b9 67 26 b0 c0 u_..V...!..g&.. 00:22:29.522 00000190 6e eb 26 a3 a7 4f dc 07 12 5c ee 9e 71 64 5b 0e n.&..O...\..qd[. 00:22:29.522 000001a0 89 b0 0b a1 7c 2d 86 5c 44 c2 ec be 54 a1 43 ed ....|-.\D...T.C. 00:22:29.522 000001b0 95 f5 3b bd d6 5f 9e 4e 24 25 fb de 55 91 25 63 ..;.._.N$%..U.%c 00:22:29.522 000001c0 ca db 41 cb c5 11 bf fd 46 4f 6e 92 0f d8 a9 cd ..A.....FOn..... 00:22:29.522 000001d0 3f 6f 1e 88 5a 9b f0 f2 47 e9 7c 29 7e 16 62 9c ?o..Z...G.|)~.b. 00:22:29.522 000001e0 ed 64 0c 2d 2e aa 37 5d 08 49 1a b3 93 07 83 c8 .d.-..7].I...... 00:22:29.522 000001f0 5c dd 6d 5f 36 2f db 9f 2b 0c 7c 63 a2 9a 3d 2b \.m_6/..+.|c..=+ 00:22:29.522 00000200 d7 6b 69 88 3c 78 66 82 4d b4 28 31 d2 38 9b e6 .ki...3..8_R..a! 00:22:29.523 000002d0 e6 fd 0b bb 70 4e d2 48 f4 39 c7 09 39 70 84 c6 ....pN.H.9..9p.. 00:22:29.523 000002e0 9b 98 af f3 dc f1 3c 97 6d 50 1c a0 6e 4f 79 be ......<.mP..nOy. 00:22:29.523 000002f0 4e 75 8b 02 17 f1 ff 1e a0 97 a3 0e e1 b0 26 87 Nu............&. 00:22:29.523 00000300 24 6a 14 71 7c 0d ab 08 3d 61 05 f1 ea e6 8e 0a $j.q|...=a...... 00:22:29.523 00000310 0a 70 96 8c dc 5b 9c 1f 84 9e bf 7f 84 41 54 04 .p...[.......AT. 00:22:29.523 00000320 ce a2 38 9c 65 81 dd 69 65 2b c8 9d a2 6e e7 10 ..8.e..ie+...n.. 00:22:29.523 00000330 9f 70 a3 a3 b4 37 be bf e4 88 cf 68 be fc 5b 10 .p...7.....h..[. 00:22:29.523 00000340 27 31 88 a5 b2 2a e4 e2 b4 76 dc 49 ef c7 22 83 '1...*...v.I..". 00:22:29.523 00000350 7c 91 fb 23 51 37 e0 d2 5e ac 2c 80 f8 e2 74 14 |..#Q7..^.,...t. 00:22:29.523 00000360 20 25 e6 5b 98 e2 e3 1d 53 33 03 c0 04 ac 05 78 %.[....S3.....x 00:22:29.523 00000370 aa e8 21 42 de fe 08 f9 8d 23 ac 65 46 bd 34 28 ..!B.....#.eF.4( 00:22:29.523 00000380 8c 84 94 ef 0f ac 66 e0 56 ab 89 4e 9f 84 21 39 ......f.V..N..!9 00:22:29.523 00000390 1d a5 f1 87 16 08 a6 80 6f 90 b1 99 bb d9 87 a5 ........o....... 
00:22:29.523 000003a0 d2 7b fc 05 05 ee f0 54 b0 01 f9 31 8c cc cb 4a .{.....T...1...J 00:22:29.523 000003b0 54 db 85 8f 7f 84 d1 d3 a5 de 83 f0 20 62 af 98 T........... b.. 00:22:29.523 000003c0 12 80 b4 05 95 a0 cd fd 6c a3 98 b4 70 b0 95 6c ........l...p..l 00:22:29.523 000003d0 ea 5b 07 f7 d0 a2 9e 0c db 42 f6 81 8d d8 26 73 .[.......B....&s 00:22:29.523 000003e0 90 28 9c 2b 1e ab ab 51 fc 8b af 62 25 f8 66 fe .(.+...Q...b%.f. 00:22:29.523 000003f0 f7 c2 15 af be c0 c8 19 ed 1d 25 93 2b dc 2f e3 ..........%.+./. 00:22:29.523 [2024-09-27 13:27:19.476843] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key2, hash=2, dhgroup=5, seq=3775755267, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.523 [2024-09-27 13:27:19.477216] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.523 [2024-09-27 13:27:19.561493] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.523 [2024-09-27 13:27:19.561911] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.523 [2024-09-27 13:27:19.562100] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.523 [2024-09-27 13:27:19.562309] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.523 [2024-09-27 13:27:19.614355] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.523 [2024-09-27 13:27:19.614650] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.523 [2024-09-27 13:27:19.614788] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 5 (ffdhe8192) 00:22:29.523 [2024-09-27 13:27:19.614966] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.523 [2024-09-27 13:27:19.615223] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.523 ctrlr pubkey: 00:22:29.523 00000000 1c 51 09 ce 63 64 1d 30 50 22 6e d4 23 1b 6a 6f .Q..cd.0P"n.#.jo 00:22:29.523 00000010 26 e3 d4 82 71 bf 4a 21 2a 6a 2e 83 bf a3 1c 82 &...q.J!*j...... 00:22:29.523 00000020 d5 5c 64 bb b8 2a f9 32 d3 76 f0 7a 28 b7 62 f2 .\d..*.2.v.z(.b. 00:22:29.523 00000030 e3 8d 7d e6 45 5b 4f 14 05 90 c2 66 bf 89 f8 20 ..}.E[O....f... 00:22:29.523 00000040 f9 dd 69 34 85 1c 5e 18 cc 0a 91 3c b7 69 92 03 ..i4..^....<.i.. 00:22:29.523 00000050 9d 07 fc 6b 35 68 b8 5d 66 d1 e3 fd 33 66 63 9d ...k5h.]f...3fc. 00:22:29.523 00000060 1a 9a 22 56 d0 f3 7d 89 77 94 50 8a ce 2f 8d ad .."V..}.w.P../.. 00:22:29.523 00000070 d5 dd f7 58 7c ae be c0 6c 6c 98 84 23 85 fa 75 ...X|...ll..#..u 00:22:29.523 00000080 ca 7c 17 be 02 dc a9 c7 30 ce 21 9d 28 1e 05 e5 .|......0.!.(... 
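The send_reply entry above shows this round using key=key2 and a 48-byte reply before the qpair reaches the done state. In DH-HMAC-CHAP that reply is an HMAC in the negotiated hash over the controller's challenge, with the Diffie-Hellman secret from the exchange above mixed into the computation; the exact derivation and message layout are defined by the NVMe specification and implemented in nvme_auth.c, so the fragment below is only a shape-of-the-computation sketch with made-up inputs:

import hashlib
import hmac
import secrets

# Made-up stand-ins: in the real flow these are the configured DH-HMAC-CHAP
# secret ("key2" in the log) and the 1024-byte "dh secret" dumped above.
chap_secret = b"example-chap-secret"
dh_secret = secrets.token_bytes(1024)
challenge = secrets.token_bytes(48)       # hash-sized controller challenge

# Illustrative combination only (the real spec mixes the DH secret in through
# its own derivation steps): fold the DH secret into the key, then answer the
# challenge with HMAC-SHA-384.
augmented_key = hashlib.sha384(dh_secret + chap_secret).digest()
reply = hmac.new(augmented_key, challenge, hashlib.sha384).digest()
assert len(reply) == 48                   # matches len=48 in the log entry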
00:22:29.523 00000090 3e 36 92 d8 6e 96 64 60 15 ad 06 e9 06 77 5b 57 >6..n.d`.....w[W 00:22:29.523 000000a0 95 7f 0f 8e 8b ca 34 95 f8 51 9e cd fc 97 60 35 ......4..Q....`5 00:22:29.523 000000b0 b4 fd 15 13 93 4c 9a 76 9a f2 0c 2a 19 7f 1f 6c .....L.v...*...l 00:22:29.523 000000c0 fa a3 67 c1 7e 9e f5 0e c4 6a b3 b1 2a aa 3c 3b ..g.~....j..*.<; 00:22:29.523 000000d0 4f 21 45 03 cf 13 cc 49 d9 55 7d e2 13 0b 40 1f O!E....I.U}...@. 00:22:29.523 000000e0 a5 be 17 b2 78 28 b6 80 6e 5b f8 f6 67 49 6e 5c ....x(..n[..gIn\ 00:22:29.523 000000f0 66 d2 2f b7 2c 15 b8 38 31 c8 4e f7 7b 68 25 d0 f./.,..81.N.{h%. 00:22:29.523 00000100 c9 3c c7 48 76 e9 c2 e5 f3 02 a6 d3 a2 d4 54 1f .<.Hv.........T. 00:22:29.523 00000110 02 62 f4 c1 5c 77 b6 5a 5e 54 d1 35 a2 46 3d b5 .b..\w.Z^T.5.F=. 00:22:29.523 00000120 d8 c4 36 d3 0a fd ab 54 8a bc ba 6e 66 92 72 63 ..6....T...nf.rc 00:22:29.523 00000130 c8 91 a2 05 e5 e7 ac a0 4a cb 9d 02 55 39 f8 db ........J...U9.. 00:22:29.523 00000140 79 39 6f ab 29 30 a6 31 93 a2 34 d1 00 b6 15 15 y9o.)0.1..4..... 00:22:29.523 00000150 f7 3b 26 b5 bd ff ea 5d 7c 72 c7 1f 3e 04 29 9b .;&....]|r..>.). 00:22:29.523 00000160 1d f2 36 ff 0a 35 85 36 da f1 84 e5 bf c8 3c 1b ..6..5.6......<. 00:22:29.523 00000170 ea 07 19 b8 10 f0 17 1d 0f fd fe e1 e4 95 14 3f ...............? 00:22:29.523 00000180 2a 85 0d 7c d8 e1 80 68 28 3d 4d ab 23 06 e6 a3 *..|...h(=M.#... 00:22:29.523 00000190 ae 6b 33 22 0e a6 05 77 7c bd 88 27 35 bf fc ad .k3"...w|..'5... 00:22:29.523 000001a0 11 d6 ec 4d 40 20 35 52 2b 39 f3 ea a1 d1 24 23 ...M@ 5R+9....$# 00:22:29.523 000001b0 92 c1 45 b1 84 d7 50 ea 9f 51 96 9a c4 7d 87 e7 ..E...P..Q...}.. 00:22:29.523 000001c0 86 7b 30 9f fd cd c5 f8 34 dc 35 82 5a 7d b3 2a .{0.....4.5.Z}.* 00:22:29.523 000001d0 21 88 dd 8d 0a 6f d2 c9 62 60 4e e6 3d ef 98 dd !....o..b`N.=... 00:22:29.523 000001e0 49 67 0a 29 e8 d1 4f 89 5e 0d c4 14 4e 45 21 80 Ig.)..O.^...NE!. 00:22:29.523 000001f0 a4 a4 94 23 89 f6 b3 2a 6b 25 fd 4c 82 fe 02 c7 ...#...*k%.L.... 00:22:29.523 00000200 4a 2f f1 88 66 04 06 8e 9c 4b c9 98 0e de 62 7a J/..f....K....bz 00:22:29.523 00000210 79 c2 a3 03 be 14 d4 44 88 e1 3a 6e 00 d7 58 f7 y......D..:n..X. 00:22:29.523 00000220 5c 9d 68 4d 10 75 22 12 f1 ff 1a 26 51 d3 89 c5 \.hM.u"....&Q... 00:22:29.523 00000230 fe 11 d2 d3 6d bc 10 8d c3 9f c1 84 44 e5 c2 35 ....m.......D..5 00:22:29.523 00000240 1b 4d 6a 43 07 da 38 d1 ce 16 49 86 bf a3 06 63 .MjC..8...I....c 00:22:29.523 00000250 07 d1 0f 3e b2 94 7e 16 bd 98 e0 9d e7 3e 95 72 ...>..~......>.r 00:22:29.523 00000260 94 28 4a f4 96 75 75 4a 35 29 bb 30 96 97 a7 60 .(J..uuJ5).0...` 00:22:29.523 00000270 d0 18 25 56 42 52 19 bc 7d 31 e7 ec 66 02 4f fa ..%VBR..}1..f.O. 00:22:29.523 00000280 49 c7 9a c8 29 d9 8a 29 86 32 3f a4 2c 42 71 c0 I...)..).2?.,Bq. 00:22:29.523 00000290 bd 76 50 cf a2 b6 6f 42 4e 3c 7f 6f 2d 60 df 3d .vP...oBN<.o-`.= 00:22:29.523 000002a0 9d ca 10 88 16 5b c9 1d a9 6a 74 9a e5 a1 97 ee .....[...jt..... 00:22:29.523 000002b0 7e c2 03 2d 4f 47 36 20 c0 06 88 6b e3 8c e3 c2 ~..-OG6 ...k.... 00:22:29.523 000002c0 1c bf 46 bc fa 87 11 79 6d a5 f7 58 88 a5 ac 48 ..F....ym..X...H 00:22:29.523 000002d0 46 69 f1 39 a2 5a 3e 7a f3 84 4e 68 b6 39 b1 5f Fi.9.Z>z..Nh.9._ 00:22:29.523 000002e0 cb c9 5e ae 06 9b 02 12 06 1a 8a 93 93 fd 43 65 ..^...........Ce 00:22:29.523 000002f0 ac f9 24 b6 19 70 2b 19 5f be 84 54 84 0a 14 bc ..$..p+._..T.... 00:22:29.523 00000300 2a 32 75 53 c2 d0 7e 14 c0 60 24 bc 49 24 6b e5 *2uS..~..`$.I$k. 
00:22:29.523 00000310 81 cb f2 69 26 31 62 64 29 a7 ec c4 78 72 d2 66 ...i&1bd)...xr.f 00:22:29.523 00000320 23 6a 9a db 8f 17 23 68 37 03 0a 84 4e 6e dd 60 #j....#h7...Nn.` 00:22:29.523 00000330 37 80 7a bd 4c af 38 76 80 0a b4 03 b9 40 25 56 7.z.L.8v.....@%V 00:22:29.523 00000340 ed 87 f7 19 fb f9 fc 6b 6b f0 a4 6e 9e 34 1a 5d .......kk..n.4.] 00:22:29.523 00000350 f6 de b2 9c 0e 13 57 cc de 4e d5 00 d2 f3 f2 af ......W..N...... 00:22:29.523 00000360 de 40 43 f1 b1 df 30 33 bd c3 4c e7 42 9f 88 69 .@C...03..L.B..i 00:22:29.523 00000370 48 bd 59 7b b6 31 0d 52 72 9e d1 2d 0e 82 b8 6e H.Y{.1.Rr..-...n 00:22:29.523 00000380 bd 55 8d ae 2e b1 4b c7 12 af 2b 9f 94 b5 53 c8 .U....K...+...S. 00:22:29.523 00000390 92 f5 17 70 33 2a 1a 09 3c 13 2a 2f 2e 47 06 68 ...p3*..<.*/.G.h 00:22:29.523 000003a0 ab 7f d9 94 a1 4a 6b f6 15 d2 b4 37 db 09 aa 45 .....Jk....7...E 00:22:29.523 000003b0 0a 7f be 93 5e 9b 3a 6e 6c 67 64 3a bc bd 65 c4 ....^.:nlgd:..e. 00:22:29.523 000003c0 60 4c ef 26 2d 30 12 05 46 60 5f dc 60 73 c7 a8 `L.&-0..F`_.`s.. 00:22:29.523 000003d0 5a 95 ad 07 ae 6f 7d ee 76 3e 7b 77 19 52 53 18 Z....o}.v>{w.RS. 00:22:29.523 000003e0 0c fc f6 8f 31 c3 5b 2c 7a e1 50 17 d9 67 a7 3f ....1.[,z.P..g.? 00:22:29.523 000003f0 51 11 b8 b3 c6 27 06 0a 8b 2c d4 24 19 88 c8 be Q....'...,.$.... 00:22:29.523 host pubkey: 00:22:29.523 00000000 67 86 10 29 3f ff 89 4c e2 52 7c 03 bc b1 19 bf g..)?..L.R|..... 00:22:29.523 00000010 c1 64 c1 58 c3 ef 39 fc 5d 36 fd 80 de b0 2a 31 .d.X..9.]6....*1 00:22:29.523 00000020 08 31 5a 10 77 d2 3f 70 22 73 ca f5 13 30 52 7f .1Z.w.?p"s...0R. 00:22:29.523 00000030 8d 13 57 df 83 b9 1c 16 2b 3c fb ab 6a cf 48 4f ..W.....+<..j.HO 00:22:29.523 00000040 84 35 21 76 38 ed f3 1d ac 30 9d c1 a7 4c d5 06 .5!v8....0...L.. 00:22:29.523 00000050 10 3e ab 95 7a 32 29 27 d8 6f 46 3f 48 8b 34 98 .>..z2)'.oF?H.4. 00:22:29.523 00000060 d0 cb e9 43 3f b7 d9 9c 05 74 33 4d 6b 79 cf 63 ...C?....t3Mky.c 00:22:29.523 00000070 60 87 9e ee 15 d9 c7 0c 8b 71 9c cb e1 84 f2 97 `........q...... 00:22:29.523 00000080 de 7d 45 83 b7 ca e4 aa 7f 71 68 e1 f6 a9 bd 54 .}E......qh....T 00:22:29.523 00000090 6d e2 0b 82 0c 72 db df 74 ac 54 95 50 8a 2e 94 m....r..t.T.P... 00:22:29.523 000000a0 17 1f a1 81 2b 07 23 b3 85 b1 09 5a d6 80 29 d1 ....+.#....Z..). 00:22:29.523 000000b0 3b 97 8d 59 26 a0 98 b5 87 cb 2f d2 10 1f 15 ff ;..Y&...../..... 00:22:29.523 000000c0 8e b9 38 a3 8b 95 4b f2 8f 07 fd 9b 35 cd b3 ef ..8...K.....5... 00:22:29.523 000000d0 bd dd f8 20 0a 65 4b be f6 42 64 c4 b3 7d 51 cc ... .eK..Bd..}Q. 00:22:29.523 000000e0 ce ab f7 14 db 97 7c 58 79 70 9d 25 b3 39 5d 05 ......|Xyp.%.9]. 00:22:29.523 000000f0 ab 1c 26 50 ba 26 91 40 4f 89 14 54 4b 9d 84 78 ..&P.&.@O..TK..x 00:22:29.523 00000100 32 48 2e 55 e6 cb e1 6b ed 10 43 4a 1e 54 00 e2 2H.U...k..CJ.T.. 00:22:29.523 00000110 e4 f2 53 10 d1 21 e5 9c a8 af 44 5c a3 3d 6a b8 ..S..!....D\.=j. 00:22:29.523 00000120 db fc 95 fd ce 62 2a a4 b1 cb 4c 7c dd 52 7b 71 .....b*...L|.R{q 00:22:29.523 00000130 ec d7 f2 5b 07 da c5 39 31 25 b0 78 41 52 59 7a ...[...91%.xARYz 00:22:29.523 00000140 1e 7e 9a 3e 2f b9 1b a4 78 2b aa 95 f6 3e 89 2a .~.>/...x+...>.* 00:22:29.523 00000150 cd 82 67 fa dc 6f 05 fc f5 56 9c ae 6e 61 bb c8 ..g..o...V..na.. 00:22:29.523 00000160 59 8e 36 3b ad a9 fc ff 6e 08 f1 a9 ac 0f 90 d1 Y.6;....n....... 00:22:29.523 00000170 cf 61 7b 85 6a cc 97 82 2b b6 9e 27 95 0a 88 98 .a{.j...+..'.... 00:22:29.523 00000180 01 f4 ff 75 b5 a8 0c 70 23 97 dc 5f d4 2c 38 08 ...u...p#.._.,8. 
00:22:29.523 00000190 83 5a cc b4 e1 e9 5f 0c 4a 14 8b 13 22 fe ad c4 .Z...._.J..."... 00:22:29.523 000001a0 ab 5c 24 fd c5 23 ad 17 88 cb 7e 0f 25 f5 3b e3 .\$..#....~.%.;. 00:22:29.523 000001b0 1f 5c bf 1d 7a 73 c7 a2 97 29 f0 ae 31 5f 0b 4a .\..zs...)..1_.J 00:22:29.523 000001c0 9c e2 65 a3 da 63 7e 26 2f 8d 01 18 89 9b 62 ad ..e..c~&/.....b. 00:22:29.523 000001d0 72 45 73 ce ae 6b 5b 53 93 9d 7d 45 80 d6 21 69 rEs..k[S..}E..!i 00:22:29.523 000001e0 82 cb f1 5b 20 28 cd 08 4e d0 1e fd cd c0 90 b4 ...[ (..N....... 00:22:29.523 000001f0 53 51 c9 33 77 b0 bd 53 1a b9 a1 94 1e 52 0d ed SQ.3w..S.....R.. 00:22:29.523 00000200 d1 fa 67 41 67 81 a3 89 2a 50 2a 6d b0 08 67 98 ..gAg...*P*m..g. 00:22:29.523 00000210 03 55 c9 c7 96 d3 ae 84 d2 e7 57 c9 b6 af 60 7e .U........W...`~ 00:22:29.523 00000220 77 d8 d1 dc df c6 9b c5 25 65 f6 06 8d 9b d8 7a w.......%e.....z 00:22:29.523 00000230 cc b2 cd f2 38 1d 00 d3 63 b8 72 02 c1 82 40 02 ....8...c.r...@. 00:22:29.523 00000240 18 27 fd 51 e2 42 13 6a d3 83 a9 7b 06 9f 4d 81 .'.Q.B.j...{..M. 00:22:29.523 00000250 b6 6f d1 e1 f1 0f 31 f3 b8 19 d0 6e 61 48 c8 84 .o....1....naH.. 00:22:29.523 00000260 e2 d0 c1 96 92 c8 51 4f 70 e6 70 30 ee 74 40 f8 ......QOp.p0.t@. 00:22:29.523 00000270 80 af 22 06 46 b3 95 70 0f ab 1b 33 a7 9d a4 d4 ..".F..p...3.... 00:22:29.523 00000280 9c ee 29 8d 1b 63 88 34 55 1f 98 c2 d7 d6 32 16 ..)..c.4U.....2. 00:22:29.523 00000290 7a 6e 6d 13 79 17 15 2c de 47 31 6f c9 86 de 64 znm.y..,.G1o...d 00:22:29.523 000002a0 1f a0 ea 61 9c 5d f4 7e 0a 39 7f bf 48 0a 8c 92 ...a.].~.9..H... 00:22:29.523 000002b0 d6 b7 7c d6 6b 00 93 ec 88 79 ee eb 74 a1 b0 a9 ..|.k....y..t... 00:22:29.523 000002c0 11 d9 6c 95 d1 0a 46 d8 12 b0 10 ad da db a6 e9 ..l...F......... 00:22:29.523 000002d0 2f 05 79 05 b0 20 63 b7 60 a4 7e c1 26 4d 0a f5 /.y.. c.`.~.&M.. 00:22:29.523 000002e0 23 a3 bb 1f 72 09 59 aa 7e 83 99 ef 6c b9 b1 c5 #...r.Y.~...l... 00:22:29.524 000002f0 e9 19 ab bb a5 6b d0 97 60 01 9c e8 3b 47 84 2f .....k..`...;G./ 00:22:29.524 00000300 8c b3 e8 2c 4f 79 34 db 40 4a c6 df 56 27 2f d8 ...,Oy4.@J..V'/. 00:22:29.524 00000310 3e 17 97 04 0e 73 96 e3 f5 3d f5 cb 70 34 9a 07 >....s...=..p4.. 00:22:29.524 00000320 7e 32 29 5b 16 3d c9 da 6e 9d 9f 48 5e 5e 93 72 ~2)[.=..n..H^^.r 00:22:29.524 00000330 ac 9b 4c 7c 02 1c 8b 4a 4a 72 94 dd 09 3e 6d 2c ..L|...JJr...>m, 00:22:29.524 00000340 6f 27 e5 1e f6 44 d1 52 5c 20 8a e6 32 ab 4f 25 o'...D.R\ ..2.O% 00:22:29.524 00000350 69 85 45 a8 5f 86 55 90 c6 6b 07 f5 8e da a5 84 i.E._.U..k...... 00:22:29.524 00000360 c6 92 7c f5 a7 39 96 b6 0b 5c 34 85 25 ea 1b 7f ..|..9...\4.%... 00:22:29.524 00000370 6e 66 94 b0 75 d6 c0 7c ce 34 a1 0b 93 df d3 71 nf..u..|.4.....q 00:22:29.524 00000380 9a 39 30 d4 72 01 16 53 70 08 90 84 d6 a5 56 8a .90.r..Sp.....V. 00:22:29.524 00000390 1c c3 c4 b0 ab 82 af e3 32 6f 57 f3 e0 c8 dc 66 ........2oW....f 00:22:29.524 000003a0 a8 e9 13 45 96 7d 2b df 5a ca fe c1 e4 6c 9f ac ...E.}+.Z....l.. 00:22:29.524 000003b0 3d d2 69 50 5a 68 3c a4 d7 6e 4e fb 59 b4 0e 35 =.iPZh<..nN.Y..5 00:22:29.524 000003c0 35 6d f3 55 bc 5e 5b bd a6 3a 6b 79 79 1d 5d d9 5m.U.^[..:kyy.]. 00:22:29.524 000003d0 81 bf 5f 54 95 26 0d 7c 41 91 94 89 ab dd b8 c3 .._T.&.|A....... 00:22:29.524 000003e0 78 08 08 09 59 68 f5 00 03 00 24 fa 71 5e ec ab x...Yh....$.q^.. 00:22:29.524 000003f0 28 15 16 d7 b0 bf f2 a6 4e 33 c3 80 ac 97 e6 cc (.......N3...... 
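The 0x400-byte "ctrlr pubkey" and "host pubkey" dumps above are the public values of the ffdhe8192 (dhgroup 5) exchange negotiated earlier in this log, and the "dh secret" dump that follows is the shared value both sides derive from them. The standalone C sketch below is only a rough illustration of that derivation, not SPDK's nvme_auth.c implementation: it substitutes a tiny toy prime for the real 8192-bit group and assumes a 64-bit GCC/Clang toolchain for __uint128_t.

/* Toy illustration of the Diffie-Hellman exchange whose byte dumps appear in
 * this log ("ctrlr pubkey", "host pubkey", "dh secret").  The real exchange
 * uses the 8192-bit ffdhe8192 group (dhgroup 5); the small prime below is for
 * readability only and offers no security. */
#include <stdint.h>
#include <stdio.h>

#define TOY_P 0xFFFFFFFFFFFFFFC5ULL  /* 2^64 - 59, a toy stand-in for the group prime */
#define TOY_G 2ULL                   /* generator 2, as in the ffdhe groups */

/* (a * b) mod m without overflow, then square-and-multiply modular exponentiation. */
static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m)
{
    return (uint64_t)(((__uint128_t)a * b) % m);
}

static uint64_t powmod(uint64_t base, uint64_t exp, uint64_t m)
{
    uint64_t result = 1;

    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = mulmod(result, base, m);
        base = mulmod(base, base, m);
        exp >>= 1;
    }
    return result;
}

int main(void)
{
    /* Each side picks a private exponent and publishes g^x mod p. */
    uint64_t host_priv = 0x1234567890ABCDEFULL;   /* arbitrary toy value */
    uint64_t ctrlr_priv = 0xFEDCBA0987654321ULL;  /* arbitrary toy value */
    uint64_t host_pub = powmod(TOY_G, host_priv, TOY_P);    /* "host pubkey"  */
    uint64_t ctrlr_pub = powmod(TOY_G, ctrlr_priv, TOY_P);  /* "ctrlr pubkey" */

    /* Both sides raise the peer's public value to their own private exponent
     * and arrive at the same number: the "dh secret" in the dumps. */
    uint64_t secret_host = powmod(ctrlr_pub, host_priv, TOY_P);
    uint64_t secret_ctrlr = powmod(host_pub, ctrlr_priv, TOY_P);

    printf("host view : %016llx\n", (unsigned long long)secret_host);
    printf("ctrlr view: %016llx\n", (unsigned long long)secret_ctrlr);
    return secret_host == secret_ctrlr ? 0 : 1;
}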
00:22:29.524 dh secret: 00:22:29.524 00000000 5c f7 09 51 a9 5b 75 86 8a 80 f7 bb d7 bd cb 31 \..Q.[u........1 00:22:29.524 00000010 fc 23 d3 ba fe f2 81 4e 44 c5 83 cf 27 d2 a5 a3 .#.....ND...'... 00:22:29.524 00000020 20 46 de a9 06 17 c1 2d 4e 73 2e 25 4f a1 2c d6 F.....-Ns.%O.,. 00:22:29.524 00000030 d3 c0 73 a3 a6 4a 31 91 3c 81 10 76 60 d3 04 e4 ..s..J1.<..v`... 00:22:29.524 00000040 34 8d 8d be d7 2f ba 2b 04 b9 71 25 e4 1f c3 74 4..../.+..q%...t 00:22:29.524 00000050 40 50 da cf d6 8d a2 4a ea f5 98 11 b8 2a e3 f9 @P.....J.....*.. 00:22:29.524 00000060 13 d8 73 2b 40 56 66 2f 3b a4 58 2d 87 76 cf b7 ..s+@Vf/;.X-.v.. 00:22:29.524 00000070 4f ea b1 25 9b f9 c7 9b 55 65 86 9b 7b a6 22 a1 O..%....Ue..{.". 00:22:29.524 00000080 86 81 cd b8 00 05 ba bb c2 3c c9 24 40 13 e5 c2 .........<.$@... 00:22:29.524 00000090 c1 4e 18 71 43 50 6e ea 95 51 49 5a 06 86 91 59 .N.qCPn..QIZ...Y 00:22:29.524 000000a0 a3 e2 94 7f 4b 8a a4 76 d7 ea 95 d0 05 8f 6d d8 ....K..v......m. 00:22:29.524 000000b0 ad 87 29 23 ba 86 93 ca fd 23 71 4c 6a 8b f1 a1 ..)#.....#qLj... 00:22:29.524 000000c0 43 69 69 1d b3 95 72 52 87 89 6a 55 49 c9 9a f1 Cii...rR..jUI... 00:22:29.524 000000d0 dd 67 8d 1e b6 2c f6 dd 52 1e da a5 7d a3 c7 2d .g...,..R...}..- 00:22:29.524 000000e0 49 77 6c 6f fa cd dc db ba da 6a 8b ae 90 3e 2f Iwlo......j...>/ 00:22:29.524 000000f0 88 28 58 54 12 93 fa 19 75 fc 0b aa b2 07 7e 77 .(XT....u.....~w 00:22:29.524 00000100 0d b0 f8 be 11 1e 21 86 47 b7 04 19 0a de 1a b7 ......!.G....... 00:22:29.524 00000110 41 d4 e4 4d 53 0b 3e e8 54 dc 60 bd a1 1b 95 bf A..MS.>.T.`..... 00:22:29.524 00000120 ed 30 03 44 94 d3 f8 06 5d 54 31 86 96 c4 cb 1e .0.D....]T1..... 00:22:29.524 00000130 ec f2 8b 2e ea 03 d0 58 6b cd 69 81 2a ba 43 a6 .......Xk.i.*.C. 00:22:29.524 00000140 6f 2a c5 b9 a2 95 b9 3a 08 b8 b6 85 d5 f3 30 2f o*.....:......0/ 00:22:29.524 00000150 e6 f5 9c 38 98 14 73 92 b4 a5 25 a7 63 19 d5 ec ...8..s...%.c... 00:22:29.524 00000160 9b 92 f0 5b 6f 4a 42 ec a9 40 8e d6 4b eb a9 86 ...[oJB..@..K... 00:22:29.524 00000170 21 f8 27 20 1f 51 70 e0 bc 1b f6 bb 12 bb ec c5 !.' .Qp......... 00:22:29.524 00000180 15 8d 19 8f e2 6f 10 e8 13 85 b8 3a a8 2e e8 c9 .....o.....:.... 00:22:29.524 00000190 89 96 2d fc d4 3d cf 24 5f e2 aa 50 77 0c ec ce ..-..=.$_..Pw... 00:22:29.524 000001a0 2d 72 75 81 0d a5 b5 cf 1e 6a 81 e0 e2 19 83 40 -ru......j.....@ 00:22:29.524 000001b0 1e a0 2e 47 0a 6f 06 05 73 e9 35 b3 63 e0 5b 77 ...G.o..s.5.c.[w 00:22:29.524 000001c0 00 f9 a7 f8 48 1c 50 81 62 70 d5 f1 0d ed 03 47 ....H.P.bp.....G 00:22:29.524 000001d0 18 1b 32 b3 b5 b6 f8 a0 55 ef e1 9b 18 88 82 36 ..2.....U......6 00:22:29.524 000001e0 be d1 e9 6f 06 22 24 b4 3e 6c 90 b2 eb e6 19 f5 ...o."$.>l...... 00:22:29.524 000001f0 be 03 b9 79 15 f1 56 71 5d 65 22 5b d4 a3 f5 33 ...y..Vq]e"[...3 00:22:29.524 00000200 67 37 bd 75 d1 d1 f3 9b 99 13 0f 5a 06 74 6b a2 g7.u.......Z.tk. 00:22:29.524 00000210 7f 64 5a bb 86 11 d1 b9 5f 5c 46 50 1b bd 22 c3 .dZ....._\FP..". 00:22:29.524 00000220 8e aa ea 61 ba 8b 9a 89 a0 ca e3 86 93 47 03 ac ...a.........G.. 00:22:29.524 00000230 eb c9 0a 10 de be ac 3d 20 d3 b8 e4 7f 91 2d 36 .......= .....-6 00:22:29.524 00000240 18 31 fe 9d 85 b9 29 e8 03 d3 b8 57 42 62 72 4b .1....)....WBbrK 00:22:29.524 00000250 6a 1d d6 3e f2 40 b1 31 53 7e cc 56 86 ef ee 18 j..>.@.1S~.V.... 00:22:29.524 00000260 9e 8e 14 28 45 23 71 01 a7 23 b6 ef 98 0c 01 0f ...(E#q..#...... 00:22:29.524 00000270 4c b6 bf a9 f7 3b 77 a7 37 90 5d 65 31 76 96 04 L....;w.7.]e1v.. 
00:22:29.524 00000280 2d ea 4b fa 2b da 69 97 6a f5 fd fe 98 6c 16 46 -.K.+.i.j....l.F 00:22:29.524 00000290 18 42 88 ed 84 d1 56 29 cd 81 bc 65 1a 38 06 da .B....V)...e.8.. 00:22:29.524 000002a0 5e a9 75 36 e1 7b ab be 97 d0 72 12 5f d3 4c 3b ^.u6.{....r._.L; 00:22:29.524 000002b0 be 59 99 50 e6 59 ee 7b 09 57 8c 47 ee 34 76 bc .Y.P.Y.{.W.G.4v. 00:22:29.524 000002c0 57 65 8b 6e a3 37 8a fa 04 8f 38 93 2e 5a 00 2b We.n.7....8..Z.+ 00:22:29.524 000002d0 d6 ad 4b 62 cd 44 18 ad 7a 08 9e f0 d6 fe c1 15 ..Kb.D..z....... 00:22:29.524 000002e0 13 d5 ec ff b8 7a 9a 5c 85 4e 7c 03 74 5f c1 58 .....z.\.N|.t_.X 00:22:29.524 000002f0 68 ef d9 88 92 a3 12 14 d1 d7 56 ba 51 13 5f de h.........V.Q._. 00:22:29.524 00000300 98 e0 cb 17 f3 b5 59 57 04 ce 6a b2 1b 27 8c a0 ......YW..j..'.. 00:22:29.524 00000310 a7 50 38 13 41 38 4a 3c ca f9 8c 35 81 29 69 e9 .P8.A8J<...5.)i. 00:22:29.524 00000320 33 fa 32 39 0a e7 24 df 4f 28 65 e8 fb 8e 04 0a 3.29..$.O(e..... 00:22:29.524 00000330 14 ed 3f 22 9d 62 01 c4 17 90 78 a2 ab 9c 62 99 ..?".b....x...b. 00:22:29.524 00000340 83 f4 8f da 56 50 fa 49 59 cb d7 00 df 16 93 43 ....VP.IY......C 00:22:29.524 00000350 51 1a d8 78 a1 00 15 4b 1e 3f e8 ee 88 5f e3 16 Q..x...K.?..._.. 00:22:29.524 00000360 e4 ff 47 43 1c 1f ee c4 ea 3c c2 95 06 10 8c fc ..GC.....<...... 00:22:29.524 00000370 73 af ab be 40 db 13 e1 1d b0 39 5a 17 f5 cf c5 s...@.....9Z.... 00:22:29.524 00000380 e3 2f e4 14 0c 58 5c 0c 0e 3a 92 ca 86 f2 cf e8 ./...X\..:...... 00:22:29.524 00000390 c2 f5 ad 87 33 f4 52 59 a4 9a 88 ac 24 03 1f 0d ....3.RY....$... 00:22:29.524 000003a0 8d 78 99 ce 3a 15 84 17 11 07 fc 9f fe db ed 8c .x..:........... 00:22:29.524 000003b0 04 56 18 71 ed 80 30 06 81 5e 11 66 24 4f 0e 8b .V.q..0..^.f$O.. 00:22:29.524 000003c0 78 2d 63 55 ef b7 b9 95 bb e9 c0 bf 3e e1 71 2f x-cU........>.q/ 00:22:29.524 000003d0 da 1e c3 28 f7 10 d9 37 8c c3 3e 2d 7a d5 c7 ef ...(...7..>-z... 00:22:29.524 000003e0 c2 b5 1a d8 64 d6 c5 87 25 84 f1 67 f5 58 00 0f ....d...%..g.X.. 00:22:29.524 000003f0 00 b9 38 a5 16 d5 8d 8b 30 e6 ba db 4d 6c bd 06 ..8.....0...Ml.. 
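The debug records that follow show one qpair finishing a DH-HMAC-CHAP pass (send_reply, where len=48 is consistent with the negotiated sha384 digest length, then await-reply, await-success1, await-success2 and done) before the next qpair restarts at negotiate with the same sha384/ffdhe8192 parameters. The small C table below merely restates the state ordering these log lines report; it is not taken from SPDK's internal state machine code.

/* The order of "auth state:" transitions printed by nvme_auth.c in this log,
 * captured as a stand-alone table for reference. */
#include <stdio.h>

enum auth_state {
    AUTH_NEGOTIATE,
    AUTH_AWAIT_NEGOTIATE,
    AUTH_AWAIT_CHALLENGE,
    AUTH_AWAIT_REPLY,
    AUTH_AWAIT_SUCCESS1,
    AUTH_AWAIT_SUCCESS2,
    AUTH_DONE,
};

static const char *auth_state_names[] = {
    [AUTH_NEGOTIATE]       = "negotiate",
    [AUTH_AWAIT_NEGOTIATE] = "await-negotiate",
    [AUTH_AWAIT_CHALLENGE] = "await-challenge",
    [AUTH_AWAIT_REPLY]     = "await-reply",
    [AUTH_AWAIT_SUCCESS1]  = "await-success1",
    [AUTH_AWAIT_SUCCESS2]  = "await-success2",
    [AUTH_DONE]            = "done",
};

int main(void)
{
    /* One successful pass per qpair, as seen repeatedly in this log:
     * negotiate (sha384 digest, ffdhe8192 dhgroup), wait for the challenge,
     * send the reply, then wait for success1/success2. */
    for (enum auth_state s = AUTH_NEGOTIATE; s <= AUTH_DONE; s++)
        printf("auth state: %s\n", auth_state_names[s]);
    return 0;
}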
00:22:29.524 [2024-09-27 13:27:19.784674] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key2, hash=2, dhgroup=5, seq=3775755268, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.524 [2024-09-27 13:27:19.785140] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.524 [2024-09-27 13:27:19.871948] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.524 [2024-09-27 13:27:19.872440] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.524 [2024-09-27 13:27:19.872635] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.524 [2024-09-27 13:27:19.872960] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.524 [2024-09-27 13:27:20.022831] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.524 [2024-09-27 13:27:20.023147] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 2 (sha384) 00:22:29.524 [2024-09-27 13:27:20.023410] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 5 (ffdhe8192) 00:22:29.524 [2024-09-27 13:27:20.023545] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.524 [2024-09-27 13:27:20.023906] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.524 ctrlr pubkey: 00:22:29.524 00000000 fc 30 7b 4c f6 1c d0 52 ce fe 00 30 15 f5 a5 01 .0{L...R...0.... 00:22:29.524 00000010 06 f7 bf 7f 24 8a fe 6e f0 7c 6d 9a f6 87 68 57 ....$..n.|m...hW 00:22:29.524 00000020 87 80 cf c0 2d 7d 7e 0d d3 87 85 bc a5 a2 51 b1 ....-}~.......Q. 00:22:29.524 00000030 69 ba d2 dc cb a6 08 eb a4 d9 ef f3 c7 56 83 59 i............V.Y 00:22:29.524 00000040 1b 74 27 d0 e7 fd 38 c4 a1 9e 92 c0 f1 bc cd 84 .t'...8......... 00:22:29.524 00000050 c0 0b 61 91 43 77 d0 af 50 2f 9f 94 fd 17 98 af ..a.Cw..P/...... 00:22:29.524 00000060 be a6 76 b1 03 c9 bf b5 47 cd df ab 30 12 0c 99 ..v.....G...0... 00:22:29.524 00000070 48 23 b5 85 a9 6b 72 b2 56 37 db c3 fa ea 55 c0 H#...kr.V7....U. 00:22:29.524 00000080 57 47 f5 cf 0b 7a b5 a5 26 3d ad 4b d1 06 a7 e8 WG...z..&=.K.... 00:22:29.524 00000090 85 26 80 aa 7d ab 8e df b6 66 bf bd dd 9c 66 e2 .&..}....f....f. 00:22:29.524 000000a0 74 57 6a 56 5b ae f0 11 b1 4b bc db 48 ab a4 47 tWjV[....K..H..G 00:22:29.524 000000b0 a5 7a 7a 81 04 29 01 0e da 08 8a ca 7b 4a bd 91 .zz..)......{J.. 00:22:29.524 000000c0 27 b9 87 27 7a b4 6c fa bb 0f a3 75 98 a0 59 17 '..'z.l....u..Y. 00:22:29.524 000000d0 81 14 73 2f a8 01 f7 13 71 38 7f 04 db 6c 3a c4 ..s/....q8...l:. 00:22:29.524 000000e0 32 86 72 96 80 0a ea 74 e7 78 b5 14 65 2c ab e9 2.r....t.x..e,.. 00:22:29.524 000000f0 b5 7a a0 81 7e b4 4a 83 c5 ed 43 d0 37 34 96 4d .z..~.J...C.74.M 00:22:29.524 00000100 cd 4b d3 79 f0 d4 47 4b 2e 40 80 05 37 1c 4f 14 .K.y..GK.@..7.O. 
00:22:29.524 00000110 3f 17 56 ee a0 d4 e1 63 54 fa 4f 55 e3 dc 6c 73 ?.V....cT.OU..ls 00:22:29.524 00000120 94 38 5e 5b ec 74 f6 33 3b 2d 99 0a ba e4 a5 0d .8^[.t.3;-...... 00:22:29.524 00000130 f6 a4 a5 42 44 85 2e 90 66 92 d3 1d 9e d2 60 0b ...BD...f.....`. 00:22:29.524 00000140 1a 2e 3a 1b a6 7b 88 41 14 cf 8b ec e9 a9 59 98 ..:..{.A......Y. 00:22:29.524 00000150 5f 1d 70 6e 3c 57 15 80 d6 bd e4 8a ed f8 d0 15 _.pn.2K...D.U._f..= 00:22:29.524 00000230 c7 c3 80 65 d3 60 15 37 5d 5c e8 cc 99 89 47 f5 ...e.`.7]\....G. 00:22:29.524 00000240 86 01 e0 d0 01 55 3b ad df 63 37 52 62 e3 7b c2 .....U;..c7Rb.{. 00:22:29.524 00000250 cd aa e7 f8 d2 36 99 88 10 d2 7e 05 b3 2d 62 ea .....6....~..-b. 00:22:29.524 00000260 e3 bd 7e 29 26 82 68 b0 a8 00 51 5e 0d 93 d6 0a ..~)&.h...Q^.... 00:22:29.524 00000270 47 74 91 da 23 57 39 e3 e4 4d 96 c4 02 4f 84 f7 Gt..#W9..M...O.. 00:22:29.524 00000280 97 d7 5c df 3b 33 90 c0 ed 8f 41 0f c2 b1 f0 c4 ..\.;3....A..... 00:22:29.525 00000290 5a 63 3e 50 bc d2 5d f6 b4 59 17 30 60 62 94 7d Zc>P..]..Y.0`b.} 00:22:29.525 000002a0 84 fe 3a 61 22 d7 3d 46 78 5c 65 c6 18 f1 73 a7 ..:a".=Fx\e...s. 00:22:29.525 000002b0 72 aa f5 09 97 21 4c 23 bb 8a b9 1f be 3e 70 fd r....!L#.....>p. 00:22:29.525 000002c0 d8 df 7b 7b 4d 27 4e 3d 16 29 f8 b1 24 38 d8 93 ..{{M'N=.)..$8.. 00:22:29.525 000002d0 b6 31 c3 5c b0 f6 98 05 f6 ea ce 9c fc 60 46 32 .1.\.........`F2 00:22:29.525 000002e0 a4 ad 00 cc f3 7e 80 76 ec 74 a6 22 32 cf 76 75 .....~.v.t."2.vu 00:22:29.525 000002f0 c3 34 a3 d4 d9 a3 61 cb 69 21 b8 f7 62 e6 3d 23 .4....a.i!..b.=# 00:22:29.525 00000300 42 a1 8d 91 ef 8b 28 01 01 f1 72 8b ac cc c9 6e B.....(...r....n 00:22:29.525 00000310 06 d4 db ca a8 9d fb 08 c9 fe 84 7c 19 68 e0 dd ...........|.h.. 00:22:29.525 00000320 63 49 a4 27 c6 56 26 61 4b d2 f3 dc a7 fc 0c 7b cI.'.V&aK......{ 00:22:29.525 00000330 7e 3e 16 65 da 86 34 a3 5d 98 c1 9a ef ad 2c d5 ~>.e..4.].....,. 00:22:29.525 00000340 76 d5 9c 63 05 d6 17 04 9a e9 2c 07 29 ed 21 b8 v..c......,.).!. 00:22:29.525 00000350 43 5d 58 e2 ee bb 4c f5 1a d8 b6 f4 96 b9 71 3b C]X...L.......q; 00:22:29.525 00000360 c1 55 56 33 39 d7 09 ea 8e 45 70 9e d8 bc 51 2f .UV39....Ep...Q/ 00:22:29.525 00000370 93 71 36 18 22 1f a0 bd de 7e bc 75 0b 93 a8 d4 .q6."....~.u.... 00:22:29.525 00000380 eb 88 19 ac 52 a0 a1 10 44 8a 7a 48 e7 58 8d a5 ....R...D.zH.X.. 00:22:29.525 00000390 c8 22 6b 67 9c 9f dd ff 66 b2 f7 98 2c ed 03 1f ."kg....f...,... 00:22:29.525 000003a0 44 aa 58 1a de c9 b2 14 86 41 42 07 04 0e fd 76 D.X......AB....v 00:22:29.525 000003b0 6e 0c 73 cb 4b de 78 25 45 14 ee 2c 40 b3 64 25 n.s.K.x%E..,@.d% 00:22:29.525 000003c0 4e 03 53 30 6b 2f ab ac 1e ca bc 6f e2 5b b1 40 N.S0k/.....o.[.@ 00:22:29.525 000003d0 63 5d db ac 1f 94 7b d0 20 e0 33 fd 9d 2d f4 e8 c]....{. .3..-.. 00:22:29.525 000003e0 f3 09 8e e4 3e 19 72 8e 9e ed 43 41 95 4a 36 8c ....>.r...CA.J6. 00:22:29.525 000003f0 77 52 93 9d d9 dc 49 b0 02 e1 bd 69 7a 9a 87 a8 wR....I....iz... 00:22:29.525 host pubkey: 00:22:29.525 00000000 3d 76 13 a6 3e 84 6b 53 b9 51 8c 0c 02 7a bf 2a =v..>.kS.Q...z.* 00:22:29.525 00000010 3d 03 c3 8e 90 f6 87 21 a7 f2 92 cc 55 cf 95 92 =......!....U... 00:22:29.525 00000020 1a c4 53 40 0b 53 85 24 d3 b2 f5 3f dd a3 89 cc ..S@.S.$...?.... 00:22:29.525 00000030 75 bd c2 76 e2 bc 62 c8 88 d5 15 e6 67 88 f6 3a u..v..b.....g..: 00:22:29.525 00000040 6e 8f 5c 65 df 60 08 b8 ea a3 46 6d 69 e3 df c3 n.\e.`....Fmi... 00:22:29.525 00000050 65 7c 19 f0 03 37 f9 f9 fc 67 71 16 8c ce ab c2 e|...7...gq..... 
00:22:29.525 00000060 db 99 9a fa d8 94 17 71 f6 47 65 02 3b f5 7b 4b .......q.Ge.;.{K 00:22:29.525 00000070 86 16 16 b6 ce 7d 74 dc 88 ce 15 c4 21 cb 8e ae .....}t.....!... 00:22:29.525 00000080 a2 a8 f7 7c e0 92 01 b5 73 55 e3 5c 2e 41 a4 9c ...|....sU.\.A.. 00:22:29.525 00000090 7c 91 43 f5 92 40 61 29 bf c3 a7 2d 26 40 26 2b |.C..@a)...-&@&+ 00:22:29.525 000000a0 61 e5 52 06 92 88 a4 e7 f2 27 9e b5 89 e7 88 ee a.R......'...... 00:22:29.525 000000b0 39 bf e1 06 21 91 3d 36 72 1b 97 f6 c6 d6 b9 cf 9...!.=6r....... 00:22:29.525 000000c0 54 3d bf 80 40 5b cf ee a1 33 d6 86 81 cf b8 9a T=..@[...3...... 00:22:29.525 000000d0 6b 23 9f 23 6b 1d e9 5a 65 b0 fa ac 82 f3 46 a4 k#.#k..Ze.....F. 00:22:29.525 000000e0 1a 3e f7 06 87 a9 71 98 1a 50 06 d2 81 39 ad c1 .>....q..P...9.. 00:22:29.525 000000f0 50 93 fe a8 3d a1 c0 b8 3a dc 35 d9 99 e2 f5 1a P...=...:.5..... 00:22:29.525 00000100 49 a6 14 7d 70 20 77 a9 d2 1b d6 8e 0e 9d 35 c3 I..}p w.......5. 00:22:29.525 00000110 79 65 a0 98 4c f5 2c 31 ec b3 8f 4e 1d 3c 6f 58 ye..L.,1...N..4 00:22:29.525 000002f0 6c 27 98 84 bf 6c 1b 87 5e f3 dd eb dd 52 51 1c l'...l..^....RQ. 00:22:29.525 00000300 a8 93 78 ac c6 ad 0d 95 7f 7f 6c e8 4a 77 95 83 ..x.......l.Jw.. 00:22:29.525 00000310 1c 40 f2 f4 9b 29 86 7e 13 03 04 cd 5c 54 83 0d .@...).~....\T.. 00:22:29.525 00000320 bf 60 39 c3 ce bd 77 12 bc fa c5 ac 9a f3 23 ba .`9...w.......#. 00:22:29.525 00000330 7e 96 2f 22 e2 fb 23 34 a2 d4 72 7a 12 5d 2b 93 ~./"..#4..rz.]+. 00:22:29.525 00000340 a8 46 82 05 18 b5 88 b3 ad 20 8a cd e3 6f fd 78 .F....... ...o.x 00:22:29.525 00000350 0f 9d 3b 00 e6 39 c4 47 c9 dc 7e 06 80 9a a6 8f ..;..9.G..~..... 00:22:29.525 00000360 3e 69 92 b6 ad c0 91 79 66 66 58 32 b0 9a 23 e1 >i.....yffX2..#. 00:22:29.525 00000370 82 88 1f 01 49 51 f5 34 f5 26 25 24 5b a8 ec bd ....IQ.4.&%$[... 00:22:29.525 00000380 c2 9b 96 ae c4 6a cf a3 b3 88 39 38 06 84 2f 70 .....j....98../p 00:22:29.525 00000390 8d 62 24 63 a5 d7 37 77 fb 22 76 bd dc c5 0b 59 .b$c..7w."v....Y 00:22:29.525 000003a0 03 cb 5f 4c a1 53 51 f9 8c a9 18 d3 d7 24 df e5 .._L.SQ......$.. 00:22:29.525 000003b0 6d 76 fa d8 e6 3b 44 91 b6 49 ef cf 4c 82 cc 17 mv...;D..I..L... 00:22:29.525 000003c0 69 f6 92 d1 7d 3e 48 14 1b a9 59 b7 75 44 24 52 i...}>H...Y.uD$R 00:22:29.525 000003d0 9e 74 6e 2c bc 74 84 56 4f a3 9c 76 6f 9f 38 84 .tn,.t.VO..vo.8. 00:22:29.525 000003e0 f3 46 9d 10 64 d9 00 e1 30 25 aa 16 ba 34 5d ca .F..d...0%...4]. 00:22:29.525 000003f0 23 2f fb 3a a8 8b e0 1e b8 0a 47 2b a5 86 fa 39 #/.:......G+...9 00:22:29.525 dh secret: 00:22:29.525 00000000 08 90 ca 32 45 d5 ff 08 d1 b4 2c da ea ae 63 1d ...2E.....,...c. 00:22:29.525 00000010 5f a5 2c 29 be 16 bd b1 9e ed 23 6c c9 b0 dc c9 _.,)......#l.... 00:22:29.525 00000020 04 fa 3c 7e 4a 63 57 18 74 e7 29 6e c7 db db 2b ..<~JcW.t.)n...+ 00:22:29.525 00000030 fa 64 87 88 c0 ee 9d be 40 f8 2d 65 6b 60 98 b5 .d......@.-ek`.. 00:22:29.525 00000040 33 32 26 34 98 d6 e7 d9 ce c7 e8 b3 a5 c1 d0 32 32&4...........2 00:22:29.525 00000050 71 32 76 a7 52 1e 96 6d 62 78 f9 49 e2 a6 4c fe q2v.R..mbx.I..L. 00:22:29.525 00000060 d5 76 25 29 d4 67 7f a4 c4 55 b3 22 5f 41 df 46 .v%).g...U."_A.F 00:22:29.525 00000070 74 b4 eb 2e f1 3a 86 4c e4 2b f8 e8 cb 3a bc e9 t....:.L.+...:.. 00:22:29.525 00000080 c9 7f a6 39 f4 02 ce 6c cb da 91 ce f8 cd aa 02 ...9...l........ 
00:22:29.525 00000090 f6 18 48 61 c6 e3 ca 54 8a e8 36 05 dd 5c 57 44 ..Ha...T..6..\WD 00:22:29.525 000000a0 e5 06 c0 4e f8 16 45 70 ec 85 a8 c3 e5 ce 17 6b ...N..Ep.......k 00:22:29.525 000000b0 ca 92 40 54 8c 10 25 86 cd 83 1b ad 57 62 b5 aa ..@T..%.....Wb.. 00:22:29.525 000000c0 05 cc a6 c5 df 2f 04 9a 42 c0 d9 fe b8 a2 06 e5 ...../..B....... 00:22:29.525 000000d0 3a a1 05 11 1c c6 4c 5f 01 4f 1b 6d 64 65 37 35 :.....L_.O.mde75 00:22:29.525 000000e0 b9 18 ff 7e 09 20 9c 2d d1 e7 33 65 c8 49 b8 ce ...~. .-..3e.I.. 00:22:29.525 000000f0 bb 34 cb 9b 97 97 67 d4 9e 58 27 28 07 7d 56 22 .4....g..X'(.}V" 00:22:29.525 00000100 ff 74 ba a6 83 ec fe 9b 01 4f ad de 46 16 d7 4c .t.......O..F..L 00:22:29.525 00000110 4a 64 d4 4e 2c 7d c4 d6 82 31 14 05 1d fd 2a 5a Jd.N,}...1....*Z 00:22:29.525 00000120 dc 28 16 a9 d6 d4 2e c3 a0 8d 8a 1d 1f da c0 97 .(.............. 00:22:29.525 00000130 ab 89 12 30 4e a3 5d 71 5e 86 81 d5 79 af cb 6b ...0N.]q^...y..k 00:22:29.525 00000140 d3 8e 7d f0 63 88 65 a0 3f 65 b8 60 a5 c1 a3 f0 ..}.c.e.?e.`.... 00:22:29.525 00000150 87 56 ae 88 7f 95 77 05 32 d4 1a a9 20 d2 f4 ac .V....w.2... ... 00:22:29.525 00000160 13 de 29 b2 56 4c 91 75 c7 2d 59 6a a1 aa cd ef ..).VL.u.-Yj.... 00:22:29.525 00000170 84 31 01 1c 5e a9 3e 64 0f 26 47 2d 85 a0 ad 48 .1..^.>d.&G-...H 00:22:29.525 00000180 76 97 6d 97 ff a7 fc 59 9a 1d 73 79 2e f9 6c 8c v.m....Y..sy..l. 00:22:29.525 00000190 37 b6 16 91 7f 70 0a 25 8d 0e e1 f2 c7 f2 5d c3 7....p.%......]. 00:22:29.525 000001a0 53 92 a2 27 be e4 f1 8d e2 ed 9f 46 d6 55 a0 2c S..'.......F.U., 00:22:29.525 000001b0 cd 2f f9 e0 0e 08 8f 1e 80 8d 9e f8 29 7a 9c c7 ./..........)z.. 00:22:29.525 000001c0 da 84 86 04 10 00 8a 83 0e 98 fa 97 fb 79 0b 36 .............y.6 00:22:29.525 000001d0 12 58 a9 4e 1a 5c 38 63 7a 5f 5e 98 85 b9 a8 46 .X.N.\8cz_^....F 00:22:29.525 000001e0 b6 27 67 c9 61 5b 13 1d af a8 04 3e 98 f8 44 68 .'g.a[.....>..Dh 00:22:29.525 000001f0 7c d2 e3 4c b2 91 3a 4f 89 74 21 c0 8b ba 32 9b |..L..:O.t!...2. 00:22:29.525 00000200 48 8b d2 ab 51 cc c5 5f 4e 37 49 2c 31 10 ff 33 H...Q.._N7I,1..3 00:22:29.525 00000210 2c bf a1 d7 35 8d 59 07 8d 1a 65 f6 3a 0f c0 84 ,...5.Y...e.:... 00:22:29.525 00000220 bf 2a 10 16 46 30 a2 63 40 2f e7 01 78 0e 36 f6 .*..F0.c@/..x.6. 00:22:29.525 00000230 4d ec 89 74 0a e8 e6 f8 c1 9a 41 30 e4 cb 0d a5 M..t......A0.... 00:22:29.525 00000240 19 c3 c1 08 47 cc 15 ee 67 37 b6 33 4a 9d 9c 9b ....G...g7.3J... 00:22:29.525 00000250 4a 2b c7 41 1d a3 ed 37 16 6f 84 73 f4 28 0c d9 J+.A...7.o.s.(.. 00:22:29.525 00000260 40 a0 50 aa d6 b7 00 f7 2d cd 0c e5 2d cb e6 5d @.P.....-...-..] 00:22:29.525 00000270 38 8d 6b f3 bc 16 b4 5e 82 7e 56 e4 c7 c7 51 47 8.k....^.~V...QG 00:22:29.525 00000280 e7 e8 34 ff 62 df 2d b1 2b 9d 3f 33 5f da 54 43 ..4.b.-.+.?3_.TC 00:22:29.525 00000290 27 bb c6 63 a0 86 8d cf fb d2 d4 67 5f ae 0e d6 '..c.......g_... 00:22:29.525 000002a0 b2 34 80 d2 4d 5e b1 7e 38 4d 13 12 bf f7 e8 a2 .4..M^.~8M...... 00:22:29.525 000002b0 66 25 f5 09 46 5e 0a 80 d0 24 9d 23 31 53 b3 c6 f%..F^...$.#1S.. 00:22:29.525 000002c0 7d 0a f2 d1 4e f0 e9 1c 78 65 78 e3 70 bc cd 93 }...N...xex.p... 00:22:29.525 000002d0 2b 30 a1 44 01 24 63 f7 46 1c 0f e0 ee 8d 38 13 +0.D.$c.F.....8. 00:22:29.525 000002e0 2b 4f 1d 42 62 59 c6 21 f3 7d ec e9 09 51 3b 46 +O.BbY.!.}...Q;F 00:22:29.525 000002f0 6c 62 d7 72 58 33 d3 a6 f7 01 d1 c0 9e a1 d3 b0 lb.rX3.......... 00:22:29.525 00000300 9a c4 23 90 0d ed 5b e9 a9 5e 6e 14 39 95 41 12 ..#...[..^n.9.A. 
00:22:29.525 00000310 a3 a6 29 c6 2d 3c 06 9a ea 85 e7 01 d6 d6 65 ef ..).-<........e. 00:22:29.525 00000320 da a6 74 10 60 b8 50 e9 e2 e3 0f 33 ca b4 e2 26 ..t.`.P....3...& 00:22:29.525 00000330 8f bf 99 1e 5d 37 d3 8c d6 ba b3 f3 93 b8 8e 9d ....]7.......... 00:22:29.525 00000340 1f cd 93 d8 e4 a1 84 c8 1f 51 08 04 70 c5 8c 4a .........Q..p..J 00:22:29.525 00000350 08 c0 a7 5b 98 a1 62 35 24 63 b5 f4 c7 60 96 3c ...[..b5$c...`.< 00:22:29.526 00000360 01 47 75 ba db a4 dc 4d 96 5f 29 5e fd 98 a9 48 .Gu....M._)^...H 00:22:29.526 00000370 e8 68 c1 bf 85 88 9f df 3e e9 84 f1 c1 ec f8 af .h......>....... 00:22:29.526 00000380 4a 45 88 ed bd 6b 34 bd d6 1b ee 2f 7d 03 2d 0e JE...k4..../}.-. 00:22:29.526 00000390 83 a7 cd 0d b5 67 43 d8 36 69 0f 76 28 14 36 a2 .....gC.6i.v(.6. 00:22:29.526 000003a0 07 d6 2c 7f 51 71 d0 f9 88 19 0d 15 e1 e2 92 00 ..,.Qq.......... 00:22:29.526 000003b0 c6 59 d5 6a 5f d2 65 a2 53 d1 91 6a 9b 2d 0c bb .Y.j_.e.S..j.-.. 00:22:29.526 000003c0 2e 17 3e 09 74 46 48 05 f8 18 17 84 fe 1c b6 fc ..>.tFH......... 00:22:29.526 000003d0 27 e5 61 a5 5a da 35 9b 67 77 8c 78 9b 91 d6 ff '.a.Z.5.gw.x.... 00:22:29.526 000003e0 30 4d 67 68 7a 11 1a 63 f6 0c cf e4 c2 0c 81 dd 0Mghz..c........ 00:22:29.526 000003f0 1a ed e8 5d ec 5a 36 65 77 fd cc e7 8e 1d a2 aa ...].Z6ew....... 00:22:29.526 [2024-09-27 13:27:20.191998] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key3, hash=2, dhgroup=5, seq=3775755269, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.526 [2024-09-27 13:27:20.192354] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.526 [2024-09-27 13:27:20.280274] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.526 [2024-09-27 13:27:20.280741] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.526 [2024-09-27 13:27:20.280913] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.526 [2024-09-27 13:27:20.281230] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.526 [2024-09-27 13:27:20.333127] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.526 [2024-09-27 13:27:20.333456] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.526 [2024-09-27 13:27:20.333752] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 5 (ffdhe8192) 00:22:29.526 [2024-09-27 13:27:20.333921] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.526 [2024-09-27 13:27:20.334320] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.526 ctrlr pubkey: 00:22:29.526 00000000 fc 30 7b 4c f6 1c d0 52 ce fe 00 30 15 f5 a5 01 .0{L...R...0.... 
00:22:29.526 00000010 06 f7 bf 7f 24 8a fe 6e f0 7c 6d 9a f6 87 68 57 ....$..n.|m...hW 00:22:29.526 00000020 87 80 cf c0 2d 7d 7e 0d d3 87 85 bc a5 a2 51 b1 ....-}~.......Q. 00:22:29.526 00000030 69 ba d2 dc cb a6 08 eb a4 d9 ef f3 c7 56 83 59 i............V.Y 00:22:29.526 00000040 1b 74 27 d0 e7 fd 38 c4 a1 9e 92 c0 f1 bc cd 84 .t'...8......... 00:22:29.526 00000050 c0 0b 61 91 43 77 d0 af 50 2f 9f 94 fd 17 98 af ..a.Cw..P/...... 00:22:29.526 00000060 be a6 76 b1 03 c9 bf b5 47 cd df ab 30 12 0c 99 ..v.....G...0... 00:22:29.526 00000070 48 23 b5 85 a9 6b 72 b2 56 37 db c3 fa ea 55 c0 H#...kr.V7....U. 00:22:29.526 00000080 57 47 f5 cf 0b 7a b5 a5 26 3d ad 4b d1 06 a7 e8 WG...z..&=.K.... 00:22:29.526 00000090 85 26 80 aa 7d ab 8e df b6 66 bf bd dd 9c 66 e2 .&..}....f....f. 00:22:29.526 000000a0 74 57 6a 56 5b ae f0 11 b1 4b bc db 48 ab a4 47 tWjV[....K..H..G 00:22:29.526 000000b0 a5 7a 7a 81 04 29 01 0e da 08 8a ca 7b 4a bd 91 .zz..)......{J.. 00:22:29.526 000000c0 27 b9 87 27 7a b4 6c fa bb 0f a3 75 98 a0 59 17 '..'z.l....u..Y. 00:22:29.526 000000d0 81 14 73 2f a8 01 f7 13 71 38 7f 04 db 6c 3a c4 ..s/....q8...l:. 00:22:29.526 000000e0 32 86 72 96 80 0a ea 74 e7 78 b5 14 65 2c ab e9 2.r....t.x..e,.. 00:22:29.526 000000f0 b5 7a a0 81 7e b4 4a 83 c5 ed 43 d0 37 34 96 4d .z..~.J...C.74.M 00:22:29.526 00000100 cd 4b d3 79 f0 d4 47 4b 2e 40 80 05 37 1c 4f 14 .K.y..GK.@..7.O. 00:22:29.526 00000110 3f 17 56 ee a0 d4 e1 63 54 fa 4f 55 e3 dc 6c 73 ?.V....cT.OU..ls 00:22:29.526 00000120 94 38 5e 5b ec 74 f6 33 3b 2d 99 0a ba e4 a5 0d .8^[.t.3;-...... 00:22:29.526 00000130 f6 a4 a5 42 44 85 2e 90 66 92 d3 1d 9e d2 60 0b ...BD...f.....`. 00:22:29.526 00000140 1a 2e 3a 1b a6 7b 88 41 14 cf 8b ec e9 a9 59 98 ..:..{.A......Y. 00:22:29.526 00000150 5f 1d 70 6e 3c 57 15 80 d6 bd e4 8a ed f8 d0 15 _.pn.2K...D.U._f..= 00:22:29.526 00000230 c7 c3 80 65 d3 60 15 37 5d 5c e8 cc 99 89 47 f5 ...e.`.7]\....G. 00:22:29.526 00000240 86 01 e0 d0 01 55 3b ad df 63 37 52 62 e3 7b c2 .....U;..c7Rb.{. 00:22:29.526 00000250 cd aa e7 f8 d2 36 99 88 10 d2 7e 05 b3 2d 62 ea .....6....~..-b. 00:22:29.526 00000260 e3 bd 7e 29 26 82 68 b0 a8 00 51 5e 0d 93 d6 0a ..~)&.h...Q^.... 00:22:29.526 00000270 47 74 91 da 23 57 39 e3 e4 4d 96 c4 02 4f 84 f7 Gt..#W9..M...O.. 00:22:29.526 00000280 97 d7 5c df 3b 33 90 c0 ed 8f 41 0f c2 b1 f0 c4 ..\.;3....A..... 00:22:29.526 00000290 5a 63 3e 50 bc d2 5d f6 b4 59 17 30 60 62 94 7d Zc>P..]..Y.0`b.} 00:22:29.526 000002a0 84 fe 3a 61 22 d7 3d 46 78 5c 65 c6 18 f1 73 a7 ..:a".=Fx\e...s. 00:22:29.526 000002b0 72 aa f5 09 97 21 4c 23 bb 8a b9 1f be 3e 70 fd r....!L#.....>p. 00:22:29.526 000002c0 d8 df 7b 7b 4d 27 4e 3d 16 29 f8 b1 24 38 d8 93 ..{{M'N=.)..$8.. 00:22:29.526 000002d0 b6 31 c3 5c b0 f6 98 05 f6 ea ce 9c fc 60 46 32 .1.\.........`F2 00:22:29.526 000002e0 a4 ad 00 cc f3 7e 80 76 ec 74 a6 22 32 cf 76 75 .....~.v.t."2.vu 00:22:29.526 000002f0 c3 34 a3 d4 d9 a3 61 cb 69 21 b8 f7 62 e6 3d 23 .4....a.i!..b.=# 00:22:29.526 00000300 42 a1 8d 91 ef 8b 28 01 01 f1 72 8b ac cc c9 6e B.....(...r....n 00:22:29.526 00000310 06 d4 db ca a8 9d fb 08 c9 fe 84 7c 19 68 e0 dd ...........|.h.. 00:22:29.526 00000320 63 49 a4 27 c6 56 26 61 4b d2 f3 dc a7 fc 0c 7b cI.'.V&aK......{ 00:22:29.526 00000330 7e 3e 16 65 da 86 34 a3 5d 98 c1 9a ef ad 2c d5 ~>.e..4.].....,. 00:22:29.526 00000340 76 d5 9c 63 05 d6 17 04 9a e9 2c 07 29 ed 21 b8 v..c......,.).!. 
00:22:29.526 00000350 43 5d 58 e2 ee bb 4c f5 1a d8 b6 f4 96 b9 71 3b C]X...L.......q; 00:22:29.526 00000360 c1 55 56 33 39 d7 09 ea 8e 45 70 9e d8 bc 51 2f .UV39....Ep...Q/ 00:22:29.526 00000370 93 71 36 18 22 1f a0 bd de 7e bc 75 0b 93 a8 d4 .q6."....~.u.... 00:22:29.526 00000380 eb 88 19 ac 52 a0 a1 10 44 8a 7a 48 e7 58 8d a5 ....R...D.zH.X.. 00:22:29.526 00000390 c8 22 6b 67 9c 9f dd ff 66 b2 f7 98 2c ed 03 1f ."kg....f...,... 00:22:29.526 000003a0 44 aa 58 1a de c9 b2 14 86 41 42 07 04 0e fd 76 D.X......AB....v 00:22:29.526 000003b0 6e 0c 73 cb 4b de 78 25 45 14 ee 2c 40 b3 64 25 n.s.K.x%E..,@.d% 00:22:29.526 000003c0 4e 03 53 30 6b 2f ab ac 1e ca bc 6f e2 5b b1 40 N.S0k/.....o.[.@ 00:22:29.526 000003d0 63 5d db ac 1f 94 7b d0 20 e0 33 fd 9d 2d f4 e8 c]....{. .3..-.. 00:22:29.526 000003e0 f3 09 8e e4 3e 19 72 8e 9e ed 43 41 95 4a 36 8c ....>.r...CA.J6. 00:22:29.526 000003f0 77 52 93 9d d9 dc 49 b0 02 e1 bd 69 7a 9a 87 a8 wR....I....iz... 00:22:29.526 host pubkey: 00:22:29.526 00000000 c7 88 81 a7 55 43 a3 d9 af 24 ee 0f a4 01 6d 91 ....UC...$....m. 00:22:29.526 00000010 85 35 b9 2d c5 cb 24 8b 64 82 65 29 01 de 59 14 .5.-..$.d.e)..Y. 00:22:29.526 00000020 84 dc b6 42 05 d4 17 28 d1 cf dd e1 84 c0 94 bf ...B...(........ 00:22:29.526 00000030 15 17 74 12 4e 95 63 a5 e5 99 55 84 b5 31 f8 d6 ..t.N.c...U..1.. 00:22:29.526 00000040 ce 70 85 10 f2 76 a2 df 56 7e 5b 1b f5 ee 20 ec .p...v..V~[... . 00:22:29.526 00000050 2d bc a6 9a bc db df 6f 21 44 9c 15 49 9e 42 cb -......o!D..I.B. 00:22:29.526 00000060 75 21 60 77 5a 00 40 85 b3 4c d6 07 5d 95 78 3a u!`wZ.@..L..].x: 00:22:29.526 00000070 27 29 03 2c 92 d8 16 b5 4c 4c 64 4f 06 ff 69 27 ').,....LLdO..i' 00:22:29.526 00000080 f2 23 6a 51 3a ba 4e 5e 4b 71 4e 16 14 f5 c5 ee .#jQ:.N^KqN..... 00:22:29.526 00000090 1c f7 a5 59 c5 66 8d 9c c2 38 f8 ee f0 87 a2 e2 ...Y.f...8...... 00:22:29.526 000000a0 79 43 b2 b7 49 f2 d9 31 0f 77 1a 22 eb 75 30 2c yC..I..1.w.".u0, 00:22:29.526 000000b0 a4 d8 0a d6 f7 08 d9 fe 4e 38 fe db 24 42 7f f9 ........N8..$B.. 00:22:29.526 000000c0 d1 79 c1 d9 2d ce cb 14 96 fe 93 5f df 16 23 11 .y..-......_..#. 00:22:29.526 000000d0 1d 29 7e 43 b6 00 93 69 63 f0 b0 84 16 f2 c0 57 .)~C...ic......W 00:22:29.526 000000e0 ab d4 47 c8 99 5b 0a 0d d9 a3 2e fc 31 47 e2 bc ..G..[......1G.. 00:22:29.526 000000f0 66 32 c4 a5 01 ed c3 3f 69 38 36 5c c7 11 31 ab f2.....?i86\..1. 00:22:29.526 00000100 cc 05 c9 1f 76 2f fe 1a bf c1 aa b6 b8 d4 d1 10 ....v/.......... 00:22:29.526 00000110 a4 7e b3 12 a3 71 af a7 bd ab 9a c8 dd 10 27 18 .~...q........'. 00:22:29.526 00000120 71 e4 8e 03 77 0d 6e 84 d3 ea ae 29 84 c1 29 94 q...w.n....)..). 00:22:29.526 00000130 d9 4f 91 78 eb 41 e3 a9 cc 6a 50 9e b9 d3 74 90 .O.x.A...jP...t. 00:22:29.526 00000140 b9 23 5b 63 82 03 1f 7b 6d 45 ff 5a 8c a0 6f bd .#[c...{mE.Z..o. 00:22:29.526 00000150 d4 f5 6b fe 7a 65 83 6e 9c e7 a5 e7 2d 38 ee 4d ..k.ze.n....-8.M 00:22:29.526 00000160 9c b0 fc 06 50 55 e9 d6 36 56 6e 39 f4 16 b8 80 ....PU..6Vn9.... 00:22:29.526 00000170 be a8 84 43 5c e0 5e 14 90 1f 04 d5 fb 6f 86 d4 ...C\.^......o.. 00:22:29.526 00000180 e1 e6 9d fd 90 3c 34 55 41 03 34 2e 4d db ec 25 .....<4UA.4.M..% 00:22:29.526 00000190 b2 ca dd 52 df 8f 9b 91 59 b1 e3 3a f9 98 6a f4 ...R....Y..:..j. 00:22:29.526 000001a0 b7 d3 69 4c b0 72 57 40 73 3c 29 8a a9 b2 9e 08 ..iL.rW@s<)..... 00:22:29.526 000001b0 f8 6b 68 f9 3f 1c 18 a2 41 5c 46 6e 38 01 e1 5d .kh.?...A\Fn8..] 00:22:29.526 000001c0 0a bb 5c 7a 0b 0a 0f c9 1e 12 df 2c 75 a5 91 0f ..\z.......,u... 
00:22:29.526 000001d0 40 df c9 89 27 86 9d 34 b4 bc 55 2e 8b 2e 4e cb @...'..4..U...N. 00:22:29.526 000001e0 e4 16 c8 a1 9d 58 d0 dc e9 83 b9 a9 52 1f 51 c4 .....X......R.Q. 00:22:29.526 000001f0 86 0c 2c b9 af 23 fa f0 bc e5 69 7b 64 fd dc 7a ..,..#....i{d..z 00:22:29.526 00000200 a7 37 b5 8e 37 c7 0f 16 c4 e6 de f0 33 4a df e1 .7..7.......3J.. 00:22:29.526 00000210 23 85 1c 3d 93 18 06 9f 46 46 26 45 5b a2 38 a4 #..=....FF&E[.8. 00:22:29.526 00000220 8e 8b c9 d5 7e 2e b6 99 9a 8f 34 69 e4 20 59 c3 ....~.....4i. Y. 00:22:29.526 00000230 d1 57 dc 45 3e dd ba a2 40 50 06 19 6d b2 6a 97 .W.E>...@P..m.j. 00:22:29.526 00000240 97 e6 53 a2 a0 b5 47 14 a6 23 28 2e a4 94 2a 2b ..S...G..#(...*+ 00:22:29.526 00000250 48 16 fb aa f1 4d 51 ad ec d1 8e 36 18 24 93 20 H....MQ....6.$. 00:22:29.526 00000260 e1 5c c7 21 6f d6 95 11 ce 4b 39 37 6e d4 13 ba .\.!o....K97n... 00:22:29.526 00000270 30 de 5f d2 67 5e 79 7e 92 30 bb 01 2e b4 db 60 0._.g^y~.0.....` 00:22:29.526 00000280 d7 b6 ec d9 1d c7 cf 92 56 47 44 c1 cd 67 38 e0 ........VGD..g8. 00:22:29.526 00000290 2d 31 4b c3 ec 19 45 d7 cc 5f 21 d4 61 f3 e3 e8 -1K...E.._!.a... 00:22:29.526 000002a0 42 e7 a2 49 aa 7d 7c 30 bd 9d c3 eb b5 86 3b 8d B..I.}|0......;. 00:22:29.526 000002b0 f2 42 7a 1e 74 45 e1 85 4d 2e 5a c0 53 b9 53 28 .Bz.tE..M.Z.S.S( 00:22:29.526 000002c0 a1 1f ff ce c2 85 7d 34 e1 71 ef 68 54 86 d3 18 ......}4.q.hT... 00:22:29.526 000002d0 24 c8 f7 66 ca 45 44 42 97 34 48 8a 68 7f c6 e3 $..f.EDB.4H.h... 00:22:29.526 000002e0 0a 2d 91 5a 69 de da ad f0 19 da cd bc 58 e6 f5 .-.Zi........X.. 00:22:29.526 000002f0 dd d3 45 e5 65 bb 3d 6b 01 4b 27 14 1f d4 00 b0 ..E.e.=k.K'..... 00:22:29.526 00000300 59 58 bc 25 e4 90 a7 17 df 36 cd 0b 61 74 ea dd YX.%.....6..at.. 00:22:29.526 00000310 f5 16 41 d4 80 5b 7f 70 6d 5e fc 5c 8e 1d 6f 22 ..A..[.pm^.\..o" 00:22:29.526 00000320 87 7a 22 3d b8 b5 1a 3e 15 94 fc c2 d7 8f 12 f3 .z"=...>........ 00:22:29.526 00000330 f1 32 fb 0a 18 98 44 57 fd a6 54 cb 83 47 ed 41 .2....DW..T..G.A 00:22:29.526 00000340 83 40 be cc 18 45 32 b1 1f 30 75 dd 4b 41 d6 dc .@...E2..0u.KA.. 00:22:29.526 00000350 55 66 2f e6 2e e0 7a 40 b7 2e 1d e0 9c 56 f8 8b Uf/...z@.....V.. 00:22:29.526 00000360 3d 61 88 12 95 f1 0c 57 ea a6 35 cc 4d 35 e2 0a =a.....W..5.M5.. 00:22:29.526 00000370 60 87 c6 d3 fb 2b c2 f7 7a 67 07 b1 83 60 65 25 `....+..zg...`e% 00:22:29.526 00000380 c8 72 21 91 dd 85 b0 10 e6 28 da 97 80 c5 84 a9 .r!......(...... 00:22:29.527 00000390 38 4d 5a 58 83 64 1b 19 b8 d7 3b 17 71 6d dc 90 8MZX.d....;.qm.. 00:22:29.527 000003a0 38 1e 3e af 54 07 2e d3 34 c8 f5 3a 4c d7 f6 b8 8.>.T...4..:L... 00:22:29.527 000003b0 c7 50 10 48 b2 de df b9 f4 1c f8 0d 55 4b ed 7c .P.H........UK.| 00:22:29.527 000003c0 43 ee e9 1c c7 21 5b 40 58 3a a2 25 e7 18 f1 df C....![@X:.%.... 00:22:29.527 000003d0 19 47 27 74 a4 17 db d1 ae bb 54 60 ff e6 53 3a .G't......T`..S: 00:22:29.527 000003e0 88 e2 07 0f b9 c9 6b 65 a5 f8 9d 7a e7 8d 6e 04 ......ke...z..n. 00:22:29.527 000003f0 b1 2a f9 2f cc 8f c0 c8 43 08 db ab 37 68 4d 10 .*./....C...7hM. 00:22:29.527 dh secret: 00:22:29.527 00000000 bd f7 2c fe 99 7b 00 11 0b 73 53 8f 9d a9 62 ed ..,..{...sS...b. 00:22:29.527 00000010 74 4b bf 9c bf a8 eb 63 d6 49 34 e3 ac c4 ef 11 tK.....c.I4..... 00:22:29.527 00000020 cb 4f 4e de c7 e4 57 fd 9f 27 aa 97 34 37 68 08 .ON...W..'..47h. 00:22:29.527 00000030 ca 35 cb c5 97 5c 84 d9 2f 7e 9c fe aa 6f 63 af .5...\../~...oc. 00:22:29.527 00000040 88 df bc f4 d2 8f ea 9e 07 3a ff d1 ed 48 1e a7 .........:...H.. 
00:22:29.527 00000050 19 b5 7d 47 8c 44 e8 30 90 38 6f 4c 40 72 13 6e ..}G.D.0.8oL@r.n 00:22:29.527 00000060 f0 b7 17 ce db df 53 4a 71 8c 24 30 b8 7f 99 63 ......SJq.$0...c 00:22:29.527 00000070 6e e6 09 f4 4b 2c 0c 5b 9d d4 03 37 cd ef be e5 n...K,.[...7.... 00:22:29.527 00000080 32 c8 a0 fa 58 32 95 be 4f 5a f9 de 34 cd e0 43 2...X2..OZ..4..C 00:22:29.527 00000090 b3 07 2b 82 ab ed 7b d9 e3 7d a7 2f db 95 7c c9 ..+...{..}./..|. 00:22:29.527 000000a0 68 f9 18 70 b0 3c 85 89 b2 ce c6 de a1 25 34 0d h..p.<.......%4. 00:22:29.527 000000b0 45 09 7c 53 6e 1a 0c 4b 64 41 8f d4 7d e6 87 d8 E.|Sn..KdA..}... 00:22:29.527 000000c0 95 fb 11 35 94 69 a3 0a a0 9a 9f a3 6c b1 56 2a ...5.i......l.V* 00:22:29.527 000000d0 b4 7e a0 52 f6 2c f9 7b a7 96 62 d7 52 da 0c be .~.R.,.{..b.R... 00:22:29.527 000000e0 ec 28 02 ef 02 74 e4 02 71 93 d7 0d d2 1f d2 9d .(...t..q....... 00:22:29.527 000000f0 36 b6 b0 6f d4 b6 94 9e d9 b6 76 c8 0d 00 7c 80 6..o......v...|. 00:22:29.527 00000100 b4 52 60 33 26 5e c0 28 5e 66 f6 c6 dc 79 24 c2 .R`3&^.(^f...y$. 00:22:29.527 00000110 f5 b4 b0 cf f1 c9 b5 c1 55 6a 63 dd 80 f1 74 a4 ........Ujc...t. 00:22:29.527 00000120 ea 98 5c 85 60 8f 5c 13 0b e1 e1 24 f7 db 30 77 ..\.`.\....$..0w 00:22:29.527 00000130 ac a8 ce db 9a 21 25 7c 22 fa 12 c2 94 fd df fd .....!%|"....... 00:22:29.527 00000140 5a b9 b7 19 c5 e1 f6 4b f3 8e a4 a3 f8 b2 55 0e Z......K......U. 00:22:29.527 00000150 09 91 53 fd 0c 5a 20 10 7e 29 7a 31 d3 59 87 3a ..S..Z .~)z1.Y.: 00:22:29.527 00000160 df f2 f5 09 a5 86 3e 87 30 a8 dc 43 c3 5b f0 06 ......>.0..C.[.. 00:22:29.527 00000170 bd 60 b8 0e b3 61 9f a2 07 14 0d 9a 15 db b4 47 .`...a.........G 00:22:29.527 00000180 b3 ee 6b 5b 63 24 be 3d 5c c0 13 da 48 5b ab ba ..k[c$.=\...H[.. 00:22:29.527 00000190 fa 3f 93 8e f3 16 55 a1 12 50 57 e3 ea 40 99 c4 .?....U..PW..@.. 00:22:29.527 000001a0 21 fe 7f 6d 02 b6 9c 35 c5 37 16 f9 fa d6 75 5c !..m...5.7....u\ 00:22:29.527 000001b0 ed fd b6 06 26 ba 61 3c 8d c7 d8 81 96 7b 11 f8 ....&.a<.....{.. 00:22:29.527 000001c0 84 01 0b 37 40 33 57 4b 37 7f 5f e8 47 d0 c9 22 ...7@3WK7._.G.." 00:22:29.527 000001d0 5b a7 eb 10 ce e6 45 1a f5 77 b9 4b 7d 3a d8 97 [.....E..w.K}:.. 00:22:29.527 000001e0 01 45 e0 42 b6 f8 26 b1 9b c6 43 6c b4 79 a8 8f .E.B..&...Cl.y.. 00:22:29.527 000001f0 c2 0e 2a 4b 53 19 87 28 c0 6d be 92 ff cb 87 39 ..*KS..(.m.....9 00:22:29.527 00000200 77 16 8b fb 4f 21 34 94 05 89 2c da 96 b0 55 61 w...O!4...,...Ua 00:22:29.527 00000210 72 4d bc a1 a1 b6 29 b2 8a b6 f2 7a bb 49 4a d6 rM....)....z.IJ. 00:22:29.527 00000220 9b 0e 91 90 a6 de af 6d b6 e2 9b a5 83 72 3d e8 .......m.....r=. 00:22:29.527 00000230 2c f6 f3 0b ca 0b 7c 9a 1b 58 fe 8d 87 d5 72 8b ,.....|..X....r. 00:22:29.527 00000240 8d d6 4f 68 0d 68 b1 b4 39 2e 71 20 72 db 18 06 ..Oh.h..9.q r... 00:22:29.527 00000250 3f 9a cf 8f b8 c3 b4 52 49 0e 86 ab 3a e2 ab 41 ?......RI...:..A 00:22:29.527 00000260 76 a8 1f 08 13 f6 9d ae 8f 3c 2b 45 aa 57 1e ad v........<+E.W.. 00:22:29.527 00000270 ee 6e 42 c2 2b cd d5 ab b3 02 78 e3 06 b6 0c 68 .nB.+.....x....h 00:22:29.527 00000280 d0 f2 c4 60 25 48 fd 42 b0 f2 d5 5f b8 2c be 56 ...`%H.B..._.,.V 00:22:29.527 00000290 fe ec fd df 2b c8 89 13 6f 9f ac ee 31 41 23 eb ....+...o...1A#. 00:22:29.527 000002a0 fc bb 55 7b 0f 0d f0 27 df 49 9e 53 6e 98 20 b9 ..U{...'.I.Sn. . 
00:22:29.527 000002b0 0e f8 05 00 99 39 ef 14 75 29 d3 2c ea af 32 7e .....9..u).,..2~ 00:22:29.527 000002c0 3a 4a 5a 94 73 24 69 6c 6d 11 c9 1e fa 12 42 6a :JZ.s$ilm.....Bj 00:22:29.527 000002d0 d3 9f 96 76 ab ea a6 bb 26 c8 85 da 11 fb 1d f0 ...v....&....... 00:22:29.527 000002e0 d4 a8 d7 2d a4 3e 8a 02 67 6a 85 bf 52 df e4 5f ...-.>..gj..R.._ 00:22:29.527 000002f0 d7 c7 36 a8 92 0e a4 ae 34 c7 55 6f 13 ff 79 6c ..6.....4.Uo..yl 00:22:29.527 00000300 93 6d 14 b7 70 ce cd 45 12 c8 c3 21 58 6c 44 f2 .m..p..E...!XlD. 00:22:29.527 00000310 1c 79 a1 38 12 5b 60 d2 72 b8 ee 09 1e 24 80 29 .y.8.[`.r....$.) 00:22:29.527 00000320 8d 6e 99 fb 51 4d f6 a1 b0 c7 03 13 14 8c 9e 02 .n..QM.......... 00:22:29.527 00000330 55 97 50 46 27 8b 2c c0 33 58 e5 aa 13 3d 5a fe U.PF'.,.3X...=Z. 00:22:29.527 00000340 82 e0 cc 00 e0 3e 8d 27 7f 75 1b 54 ec 32 16 ff .....>.'.u.T.2.. 00:22:29.527 00000350 31 23 75 25 f9 ea d7 0d 87 0a 19 79 e9 f1 0b 28 1#u%.......y...( 00:22:29.527 00000360 08 70 68 88 43 c0 21 34 15 d7 89 bc 3b 4c ef 78 .ph.C.!4....;L.x 00:22:29.527 00000370 7f 32 92 44 e9 5f 81 fa 76 90 d3 06 63 5a 79 30 .2.D._..v...cZy0 00:22:29.527 00000380 fd 77 c5 6a fd ca dc 30 80 a2 d8 7b 4b a5 dd ad .w.j...0...{K... 00:22:29.527 00000390 13 0b 87 d2 c0 ed 84 7e 43 39 aa ae 16 45 1d 95 .......~C9...E.. 00:22:29.527 000003a0 7c a8 d2 1a 47 72 4b f1 8c c5 bd 87 b1 3b 44 91 |...GrK......;D. 00:22:29.527 000003b0 77 74 fa 1c 77 ce 2b 00 15 af 4c cd e4 e2 b9 cd wt..w.+...L..... 00:22:29.527 000003c0 f7 d6 d1 ea 4f 02 7c 06 64 23 7f d5 e0 5c f9 81 ....O.|.d#...\.. 00:22:29.527 000003d0 3a f6 7a cf 78 3b ea df 58 32 b8 91 2b 45 e4 6f :.z.x;..X2..+E.o 00:22:29.527 000003e0 45 d0 c1 30 35 71 c5 1d 97 c0 0d 0a 4b 6b 8c 3a E..05q......Kk.: 00:22:29.527 000003f0 0e 5a c9 f8 4e 78 ad 41 d8 0d 10 c6 e8 86 4a 58 .Z..Nx.A......JX 00:22:29.527 [2024-09-27 13:27:20.499364] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key3, hash=2, dhgroup=5, seq=3775755270, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.527 [2024-09-27 13:27:20.499731] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.527 [2024-09-27 13:27:20.587898] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.527 [2024-09-27 13:27:20.588334] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.527 [2024-09-27 13:27:20.588601] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.527 [2024-09-27 13:27:20.588870] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.527 [2024-09-27 13:27:20.742302] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.527 [2024-09-27 13:27:20.742468] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 2 (sha384) 00:22:29.527 [2024-09-27 13:27:20.742806] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 5 (ffdhe8192) 00:22:29.527 [2024-09-27 13:27:20.742965] nvme_auth.c: 
163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.527 [2024-09-27 13:27:20.743207] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.527 ctrlr pubkey: 00:22:29.527 00000000 3e 24 21 6a c2 5c 9b 28 e2 21 ca b2 3e 4f be 64 >$!j.\.(.!..>O.d 00:22:29.527 00000010 98 06 40 e3 26 e5 ab 79 92 9a 8c e0 dd 06 ad d7 ..@.&..y........ 00:22:29.527 00000020 f2 03 c2 51 76 4d e2 67 65 8a 1f 28 6c e6 2c db ...QvM.ge..(l.,. 00:22:29.527 00000030 0e 73 18 7e 1b 84 cc d6 93 77 40 6d 41 5a 43 02 .s.~.....w@mAZC. 00:22:29.527 00000040 c5 f8 d8 e7 39 59 31 0c e1 f0 d5 0b 16 d2 81 5c ....9Y1........\ 00:22:29.527 00000050 90 31 32 4a d3 6c 0d cc 76 ff fb 6e af c9 8f 25 .12J.l..v..n...% 00:22:29.527 00000060 69 66 ab 07 44 f1 6a 3d f3 a6 57 02 58 76 7c bb if..D.j=..W.Xv|. 00:22:29.527 00000070 f7 21 de fa 62 99 ec 4f 4d b5 c3 5d 99 1c c6 e8 .!..b..OM..].... 00:22:29.527 00000080 b9 a6 43 88 dd c9 76 f4 58 0d 7e a2 4c 4c 2d a9 ..C...v.X.~.LL-. 00:22:29.527 00000090 63 e7 ad f0 24 a0 0f 05 73 53 80 10 b2 0d 7e 85 c...$...sS....~. 00:22:29.527 000000a0 0b 4d 74 85 58 c9 99 04 8b 7c ae af 94 d8 14 46 .Mt.X....|.....F 00:22:29.527 000000b0 ab 39 8b 23 76 1b 4b 34 d0 59 30 63 02 42 1f f0 .9.#v.K4.Y0c.B.. 00:22:29.527 000000c0 0d 00 7d af ea 26 ad 36 74 e1 d7 60 6c 6a 9a 63 ..}..&.6t..`lj.c 00:22:29.527 000000d0 a4 2d 04 22 18 0e 64 40 35 5d c0 0b ae 70 6e 32 .-."..d@5]...pn2 00:22:29.527 000000e0 6c 9b 04 ba a4 64 74 fc 44 a2 fb 4f 3c d8 30 a4 l....dt.D..O<.0. 00:22:29.527 000000f0 ef 95 96 58 b0 5e 68 b2 42 5a b6 77 73 14 64 e2 ...X.^h.BZ.ws.d. 00:22:29.527 00000100 79 76 f3 1c bb 49 fb 62 47 09 4d 21 28 d7 7f 5c yv...I.bG.M!(..\ 00:22:29.527 00000110 9c 7b 8b 74 ac 1e a2 a0 5d e1 9d 74 67 da 9c 49 .{.t....]..tg..I 00:22:29.527 00000120 ee 6e c2 b8 f8 1a 40 80 b6 9d a5 46 7a 02 4e 3b .n....@....Fz.N; 00:22:29.527 00000130 71 cc ce 94 65 c4 b5 7a b3 1a 64 1c 77 ef 84 e9 q...e..z..d.w... 00:22:29.527 00000140 cc b3 79 04 0b 00 b2 f1 96 f5 fe 98 b8 4d 67 85 ..y..........Mg. 00:22:29.527 00000150 98 67 23 3d bc d9 41 df f0 32 e4 2a 9b d6 0f 25 .g#=..A..2.*...% 00:22:29.527 00000160 45 62 94 a1 8c 69 3a e0 db 76 5c 7f d9 df c0 d8 Eb...i:..v\..... 00:22:29.527 00000170 c3 76 86 a3 68 d2 49 9c 1e f4 15 7b 62 ae 4e 04 .v..h.I....{b.N. 00:22:29.527 00000180 68 3f c0 26 02 ef 73 8c 86 ab f5 f2 2c 74 99 79 h?.&..s.....,t.y 00:22:29.527 00000190 5e 54 ff c9 46 1d f0 99 c7 fa 9d b4 40 16 5c 5f ^T..F.......@.\_ 00:22:29.527 000001a0 55 d4 7f 28 5f 13 66 37 c4 53 93 29 21 e5 04 02 U..(_.f7.S.)!... 00:22:29.527 000001b0 42 92 4a a6 f9 b2 46 34 63 6c f4 b3 b4 e5 2d d6 B.J...F4cl....-. 00:22:29.527 000001c0 a0 e4 0c 80 ce 29 e1 4c 35 92 e8 d6 86 fe c8 7f .....).L5....... 00:22:29.527 000001d0 ff d0 58 b7 31 f8 d4 c9 a7 54 16 0c c7 67 37 34 ..X.1....T...g74 00:22:29.527 000001e0 a2 6b d6 d7 4b fe 5e 95 3b 5c 7b 95 00 51 b0 c0 .k..K.^.;\{..Q.. 00:22:29.527 000001f0 2d 42 63 92 c7 bd 71 ec f7 1f 1e e3 3a 07 0c d1 -Bc...q.....:... 00:22:29.527 00000200 5b 9c 56 ad 03 ea 67 ec 27 be 0e 67 6b 4e a8 30 [.V...g.'..gkN.0 00:22:29.527 00000210 44 40 17 52 eb 03 4a 12 3c c6 a8 78 27 9b b4 cc D@.R..J.<..x'... 00:22:29.527 00000220 fc 70 25 3f a9 a4 f2 eb 75 ec 41 e3 6b 82 39 d8 .p%?....u.A.k.9. 00:22:29.527 00000230 0f 52 5b cf 57 52 1b 3d 07 fc ca ac 1e 72 2d 3d .R[.WR.=.....r-= 00:22:29.527 00000240 5e ee 10 cf c8 f9 41 23 a8 b0 c4 91 13 fc 71 e2 ^.....A#......q. 
00:22:29.527 00000250 e8 30 50 1d e6 22 c3 e2 a4 4a 0f a4 5f 99 a3 4b .0P.."...J.._..K 00:22:29.527 00000260 df 0b 2f 5c 12 6d dc eb 12 e6 89 c0 7c 83 38 e6 ../\.m......|.8. 00:22:29.527 00000270 70 4d c8 47 e6 f2 5d 28 25 2f 0a c9 0e 8c 61 97 pM.G..](%/....a. 00:22:29.527 00000280 50 0e b0 68 14 02 43 54 19 a5 61 14 8e a9 14 68 P..h..CT..a....h 00:22:29.527 00000290 b6 1b ff 6e e4 d3 5d 52 c4 74 74 2e 3a 67 03 e4 ...n..]R.tt.:g.. 00:22:29.527 000002a0 4f 5e 2b d4 d1 23 6e 08 8a 24 a3 63 75 10 8e f9 O^+..#n..$.cu... 00:22:29.527 000002b0 88 d8 4f f1 a1 cf 7e fa 20 34 e4 3c 6d 96 c0 17 ..O...~. 4.. 00:22:29.527 000003c0 2d 69 ca 4c c7 f2 0f 80 39 8d c3 d4 8e b6 e9 73 -i.L....9......s 00:22:29.528 000003d0 a9 c9 67 84 82 fd c8 51 ba ca d7 0e 68 10 9a 2e ..g....Q....h... 00:22:29.528 000003e0 d9 30 4a 23 bb b7 4d 72 98 9e 67 c1 a1 93 aa 21 .0J#..Mr..g....! 00:22:29.528 000003f0 dd eb 4d fe 0b 97 2e 6b 12 23 f9 64 9f ef ca a0 ..M....k.#.d.... 00:22:29.528 host pubkey: 00:22:29.528 00000000 5d 62 9f 80 fa cd 35 b3 1c b1 07 fc bd 88 d2 b1 ]b....5......... 00:22:29.528 00000010 38 2d 8f d5 73 8b c6 8e 49 98 a6 09 e9 ab 6c 91 8-..s...I.....l. 00:22:29.528 00000020 c4 54 cc 71 ee 8d fa c6 0c 07 bd 80 bd c2 ca c4 .T.q............ 00:22:29.528 00000030 4b 28 65 f8 77 c6 ff 7a 04 f2 45 21 73 75 c4 91 K(e.w..z..E!su.. 00:22:29.528 00000040 31 dc 5c df 18 ee 71 35 4e 9f 33 66 d4 92 70 29 1.\...q5N.3f..p) 00:22:29.528 00000050 13 5d 8b 68 a7 f7 27 de eb ab 4f bb 27 fb 99 03 .].h..'...O.'... 00:22:29.528 00000060 61 24 a2 3d 5b 01 a4 1e 00 c2 71 ff 63 30 1f c4 a$.=[.....q.c0.. 00:22:29.528 00000070 90 54 e7 3d 99 34 bb f4 6c 55 98 95 85 37 26 a4 .T.=.4..lU...7&. 00:22:29.528 00000080 00 2c 1a 6f 10 73 54 d1 f8 8d 97 5b c7 33 bf a8 .,.o.sT....[.3.. 00:22:29.528 00000090 4e 19 2f a5 db da dc 8d 2a e5 eb 9d e3 7a 96 6b N./.....*....z.k 00:22:29.528 000000a0 12 4b 46 3a 15 a0 41 13 86 60 ed 9d e7 55 f8 3a .KF:..A..`...U.: 00:22:29.528 000000b0 3f f4 24 3b 31 35 c4 41 6c d9 d5 09 1f a0 4e 79 ?.$;15.Al.....Ny 00:22:29.528 000000c0 51 09 70 99 44 65 ac 9d 9f 2b 35 87 89 80 2b 3d Q.p.De...+5...+= 00:22:29.528 000000d0 9a e5 dc 54 5b cd a4 10 7e f3 4c a1 c2 f5 23 24 ...T[...~.L...#$ 00:22:29.528 000000e0 50 71 c1 60 1e 39 1f b8 bb ab b0 1d 6f 4d c0 b5 Pq.`.9......oM.. 00:22:29.528 000000f0 9d a4 6b de 35 e8 0a dc 6a 12 c4 8c 65 73 51 b6 ..k.5...j...esQ. 00:22:29.528 00000100 61 c3 94 1a 7d 47 ad 68 51 59 1d 4b 65 1d 6a 34 a...}G.hQY.Ke.j4 00:22:29.528 00000110 96 bd 26 16 ef 34 e2 ce 1b 08 1c 5f 9d c3 21 35 ..&..4....._..!5 00:22:29.528 00000120 46 20 5b 47 e6 db 2d e2 ad 52 16 a5 bc 02 d7 d5 F [G..-..R...... 00:22:29.528 00000130 75 dd c8 dc b2 1a 65 3f 6c bb 13 c8 fa f7 5f 9b u.....e?l....._. 00:22:29.528 00000140 62 7f 42 50 c0 e6 39 72 6d b3 4c ba 57 09 92 f1 b.BP..9rm.L.W... 00:22:29.528 00000150 27 93 84 f5 84 d6 03 9f 6c 5a 21 a2 78 dd 9a 93 '.......lZ!.x... 00:22:29.528 00000160 4c 5d a5 2c e9 d3 e1 ec 1d 4c 51 a3 6b 53 18 a4 L].,.....LQ.kS.. 00:22:29.528 00000170 3d 2e 76 f8 a4 eb 87 89 68 c0 ec fa 5a 1d 05 a8 =.v.....h...Z... 00:22:29.528 00000180 a6 a3 23 b7 52 5a 53 e6 a7 f7 3b 2f ca a9 cb 06 ..#.RZS...;/.... 00:22:29.528 00000190 02 e6 0e f8 9e a2 e8 40 a8 ba b3 b4 2d 79 29 be .......@....-y). 00:22:29.528 000001a0 f8 03 27 de 18 06 4a 6e 97 35 f7 de 50 53 a8 89 ..'...Jn.5..PS.. 00:22:29.528 000001b0 14 31 0f 6c ef 66 43 f6 bf 02 12 7f 89 14 2a 8d .1.l.fC.......*. 00:22:29.528 000001c0 83 42 57 89 61 e6 08 3a 41 df f8 fe 9e 41 b5 b9 .BW.a..:A....A.. 
00:22:29.528 000001d0 15 d1 24 18 96 d6 ba 26 1c 8c 24 99 29 9c 56 46 ..$....&..$.).VF 00:22:29.528 000001e0 42 69 d7 09 9e 36 4a 4b 82 95 e9 f9 3e b2 7f 2a Bi...6JK....>..* 00:22:29.528 000001f0 15 34 e8 b4 8a 07 b2 8a dc 6f a4 30 cc ae f9 4f .4.......o.0...O 00:22:29.528 00000200 8e 12 dc 02 de eb 58 17 54 80 15 19 44 52 80 4c ......X.T...DR.L 00:22:29.528 00000210 99 1d 2f ca db 79 42 fd 2f ae af 03 ee eb 2d 27 ../..yB./.....-' 00:22:29.528 00000220 91 4a 66 90 40 0e 2f 84 b2 59 84 fa ae ad 14 cf .Jf.@./..Y...... 00:22:29.528 00000230 fc 61 61 05 dc c2 41 25 f8 0f aa a1 7a 9c 18 0a .aa...A%....z... 00:22:29.528 00000240 7c 51 01 f5 d0 46 0f 2d 98 62 bd 19 b9 2b b7 a2 |Q...F.-.b...+.. 00:22:29.528 00000250 97 2f 5e d3 00 24 6f e4 5d c7 e6 e6 63 49 47 1c ./^..$o.]...cIG. 00:22:29.528 00000260 7a 0f cf f5 58 cc 2f c8 4f da ca 0b 90 1b 2d 74 z...X./.O.....-t 00:22:29.528 00000270 52 2d 15 45 3b 71 25 75 fe 9c ef 94 45 88 ac d6 R-.E;q%u....E... 00:22:29.528 00000280 a4 a1 e1 98 60 63 38 ce db 9a d3 44 ca 0c ef e0 ....`c8....D.... 00:22:29.528 00000290 3d a0 86 58 92 2d 40 0e d2 e9 c4 75 d4 69 4a 41 =..X.-@....u.iJA 00:22:29.528 000002a0 e4 7b 94 14 77 14 0f 2a 62 9e 2a 61 16 e9 c0 3c .{..w..*b.*a...< 00:22:29.528 000002b0 ca 86 0b 52 cc 5e f3 5b cc 27 d8 c7 71 ac 7f c4 ...R.^.[.'..q... 00:22:29.528 000002c0 ae 7b 49 9a 81 60 e6 04 96 f9 fa c5 92 e4 30 8e .{I..`........0. 00:22:29.528 000002d0 57 a4 09 39 b2 6c 1e 2d bb 15 8a 3f 89 32 77 5a W..9.l.-...?.2wZ 00:22:29.528 000002e0 12 cf ea 33 ea 2d fa 80 c7 4f cf e9 aa 64 ca 05 ...3.-...O...d.. 00:22:29.528 000002f0 1b 98 73 e1 e7 35 30 ba f7 37 42 11 16 94 a8 b0 ..s..50..7B..... 00:22:29.528 00000300 eb 85 0c 05 57 61 c0 dc 1e b6 ac f6 1a a5 86 8e ....Wa.......... 00:22:29.528 00000310 4a b5 bf 2c fd e9 fe d4 e7 d0 8f 07 75 1f 1a a7 J..,........u... 00:22:29.528 00000320 63 f2 5b c3 af c0 8a 58 65 e2 96 5f c3 9b 9f 85 c.[....Xe.._.... 00:22:29.528 00000330 db 23 b4 e6 6e 03 d9 b9 ec 31 38 37 17 40 fb b6 .#..n....187.@.. 00:22:29.528 00000340 06 08 c2 41 09 3c 7a de 65 97 da e7 3e 3b 7a 1a ...A.;z. 00:22:29.528 00000350 fc 68 ec 38 0e 8e a6 ed 66 4e 49 07 a1 8d cf 48 .h.8....fNI....H 00:22:29.528 00000360 bd b9 68 80 7a c5 af b6 4b fe 37 63 7f f5 2e 04 ..h.z...K.7c.... 00:22:29.528 00000370 1f a9 19 d0 e0 9a b3 cc fb a9 27 cd 07 43 c1 fc ..........'..C.. 00:22:29.528 00000380 3d b0 f7 2b d7 7e f1 f7 74 64 7b b7 d5 52 ac 5e =..+.~..td{..R.^ 00:22:29.528 00000390 62 f4 8e 90 08 00 b8 e1 68 ce 52 26 5f a3 81 fe b.......h.R&_... 00:22:29.528 000003a0 48 f5 54 b6 23 bb 02 7e a4 64 aa c3 c8 df c6 41 H.T.#..~.d.....A 00:22:29.528 000003b0 e9 6a 2b 4d 99 24 a3 a1 6b 6b 29 2e 83 01 5b 64 .j+M.$..kk)...[d 00:22:29.528 000003c0 33 fd 40 74 d1 ed 5a b3 e3 3a 87 c6 eb d8 96 f1 3.@t..Z..:...... 00:22:29.528 000003d0 3a 96 6a 62 a6 fd b6 c4 2f ff 96 0d 41 06 4f 9f :.jb..../...A.O. 00:22:29.528 000003e0 af 37 29 2c 27 1f 34 c3 e9 a8 24 27 f5 88 56 2b .7),'.4...$'..V+ 00:22:29.528 000003f0 8a 8d 4e cf 77 b9 6c a6 41 aa 48 c5 99 ca 8f 1f ..N.w.l.A.H..... 00:22:29.528 dh secret: 00:22:29.528 00000000 e6 d3 bd 27 43 d5 09 23 9e 2f d2 3e 51 62 3e 05 ...'C..#./.>Qb>. 00:22:29.528 00000010 49 d9 b2 9b e0 42 18 10 59 26 05 2c 1d 9e b5 7e I....B..Y&.,...~ 00:22:29.528 00000020 33 f1 cf ec a4 74 2f 73 f7 0d cb 81 12 09 a2 c4 3....t/s........ 00:22:29.528 00000030 98 ec 08 51 2f 05 b6 49 a8 e2 3d eb d4 3a ae fe ...Q/..I..=..:.. 00:22:29.528 00000040 00 ea 32 82 cf 70 d9 4e 59 d1 ba 91 dc b1 79 eb ..2..p.NY.....y. 
00:22:29.528 00000050 a4 1e 4b 60 8f 2e 92 58 57 f0 c0 32 1f 6d 80 c8 ..K`...XW..2.m.. 00:22:29.528 00000060 3e 64 d9 3a 50 15 0e be e2 41 1b 0a d5 14 af 36 >d.:P....A.....6 00:22:29.528 00000070 94 d9 b0 03 dc 5b b3 da 93 e1 17 dd fc c1 f2 cf .....[.......... 00:22:29.528 00000080 fb 5e 43 d6 7b 81 38 27 40 85 c8 04 cd c2 21 27 .^C.{.8'@.....!' 00:22:29.528 00000090 b4 85 70 93 ce 15 f2 61 d0 2a 6f 14 c3 88 7b b9 ..p....a.*o...{. 00:22:29.528 000000a0 29 ec 10 6e 8e 90 74 51 98 2d c1 c7 85 08 a8 20 )..n..tQ.-..... 00:22:29.528 000000b0 a6 dc fe e0 0d cd 22 07 34 00 f8 8b 5a a4 0b 02 ......".4...Z... 00:22:29.528 000000c0 2d 98 b3 d6 72 bf e2 24 8d 65 7e 07 7e 36 72 00 -...r..$.e~.~6r. 00:22:29.528 000000d0 49 9c c9 89 7c 08 ed 74 c7 dd da 72 85 38 ea 65 I...|..t...r.8.e 00:22:29.528 000000e0 09 8e fc 4d c4 c8 a4 c6 68 23 fd 84 ac b7 fe 11 ...M....h#...... 00:22:29.528 000000f0 77 b5 25 04 e7 cc c4 7c 95 fe 4c 58 62 5f 62 41 w.%....|..LXb_bA 00:22:29.528 00000100 52 d4 5c 5f 23 71 2e 08 83 aa 61 d7 b0 90 c2 e9 R.\_#q....a..... 00:22:29.528 00000110 70 61 e8 c9 b2 bd f4 e1 4c b0 21 b7 88 8a be 12 pa......L.!..... 00:22:29.528 00000120 12 cc e5 c0 84 78 84 d0 b0 39 0a 64 59 b0 75 66 .....x...9.dY.uf 00:22:29.528 00000130 d0 55 2e 1b 39 6e 36 26 c1 dd bf 03 26 57 6b da .U..9n6&....&Wk. 00:22:29.528 00000140 84 98 7b 73 b8 da c7 b6 82 f4 4d 93 ab cf a4 44 ..{s......M....D 00:22:29.528 00000150 60 2c c2 13 c5 03 24 ef a3 5b ec 46 cf 8f 9b 29 `,....$..[.F...) 00:22:29.528 00000160 a3 38 bb ec 6a 47 65 e8 e9 42 69 4e c8 6c 5e 44 .8..jGe..BiN.l^D 00:22:29.528 00000170 6b 3a 47 c3 b9 ce b5 5f ed 73 2b f4 af 0d d9 33 k:G...._.s+....3 00:22:29.528 00000180 94 da 17 0a 26 a6 5e 21 cb 4d 2e fd c2 f7 bf 67 ....&.^!.M.....g 00:22:29.528 00000190 af f9 5b 97 87 d9 a4 1b 4c e7 59 28 7d 32 58 f1 ..[.....L.Y(}2X. 00:22:29.528 000001a0 3a a2 f7 a1 02 84 d0 41 26 3c 9b 81 5f 0a ff bb :......A&<.._... 00:22:29.528 000001b0 1f b5 72 a7 7e 76 f8 c7 44 6c a0 4d 3c 2f dd 90 ..r.~v..Dl.M.iz...q, 00:22:29.528 00000210 da 14 75 b7 e9 d8 52 d9 2c 8f 42 66 57 00 fd c1 ..u...R.,.BfW... 00:22:29.528 00000220 79 bd ff 64 55 ac db 3f 2c 78 82 09 55 3a 18 42 y..dU..?,x..U:.B 00:22:29.528 00000230 b8 20 a6 0a d0 4b b7 ba 4b a6 e3 48 77 ba 24 6e . ...K..K..Hw.$n 00:22:29.528 00000240 45 53 29 15 23 5b ac e5 3f 3d 4d 15 97 9b 5e 31 ES).#[..?=M...^1 00:22:29.528 00000250 20 0f 6e 35 bd f4 dd 4a bc c7 97 d5 8d 8b 0b 41 .n5...J.......A 00:22:29.528 00000260 0c 71 1c 83 8f cb 71 8d 43 a2 c9 07 dd 60 01 d1 .q....q.C....`.. 00:22:29.528 00000270 9a be 27 da 32 1b 2a b6 aa 65 78 56 75 7b 54 71 ..'.2.*..exVu{Tq 00:22:29.528 00000280 4b 7d 70 bc 56 a0 81 23 b9 dd 50 cd 8c 36 aa 65 K}p.V..#..P..6.e 00:22:29.528 00000290 78 45 40 17 24 ff 06 a0 b8 98 4e ae 20 9e f9 02 xE@.$.....N. ... 00:22:29.528 000002a0 f9 1a 94 7e d8 67 d7 3a 52 e4 25 10 05 ec dd 5d ...~.g.:R.%....] 00:22:29.528 000002b0 c6 a9 ec 07 7d 34 e7 e9 18 96 22 79 66 fd d5 18 ....}4...."yf... 00:22:29.528 000002c0 f6 51 b5 b1 13 44 06 c9 34 45 42 fe 03 64 76 c2 .Q...D..4EB..dv. 00:22:29.528 000002d0 09 4e d6 64 de 3d 4c 53 94 ab fc 6f 7a 25 28 0b .N.d.=LS...oz%(. 00:22:29.528 000002e0 5a 8f 46 9f 78 6a 05 a4 54 bf 97 d5 ff d0 6e f5 Z.F.xj..T.....n. 00:22:29.528 000002f0 96 d3 6b 57 0e 84 7b bd 81 ab 68 73 63 1c 80 aa ..kW..{...hsc... 00:22:29.528 00000300 50 87 89 a6 b2 13 e7 3d 7c af c1 98 0a b4 64 84 P......=|.....d. 00:22:29.528 00000310 1f 1b 7e 3e a3 39 9d 20 bb 1b 55 16 cb 7d af 41 ..~>.9. 
..U..}.A 00:22:29.528 00000320 55 4d b2 7c b9 4d 13 e2 c7 0d 3f 4b fd 56 0c 04 UM.|.M....?K.V.. 00:22:29.528 00000330 20 cb 79 02 91 df 9e a7 a6 c1 b7 76 5e 6a 03 e3 .y........v^j.. 00:22:29.528 00000340 3a 03 78 41 32 88 35 37 3a 82 34 8c a4 56 3c c0 :.xA2.57:.4..V<. 00:22:29.528 00000350 54 ce f5 8c fd 42 c4 3c ad 79 82 90 f1 07 33 53 T....B.<.y....3S 00:22:29.528 00000360 da 83 60 6d de 6c 51 d9 3b 36 9e 30 26 ed 7d 57 ..`m.lQ.;6.0&.}W 00:22:29.528 00000370 e7 1a 63 fb 3d 08 63 12 c9 c9 82 cb 82 11 77 68 ..c.=.c.......wh 00:22:29.528 00000380 83 7d 80 98 2b 73 96 c0 84 44 97 15 96 d5 14 62 .}..+s...D.....b 00:22:29.528 00000390 20 86 b3 c6 bf 7d b2 a5 08 18 60 2b 14 2a 6c d3 ....}....`+.*l. 00:22:29.528 000003a0 f6 99 9f bf 4c 60 52 f3 dd 2b 34 45 90 6a e9 af ....L`R..+4E.j.. 00:22:29.528 000003b0 c8 d2 61 07 b8 52 e2 9b f5 0e 82 f2 d9 40 b3 78 ..a..R.......@.x 00:22:29.528 000003c0 fe bb c7 cd 78 61 3f c3 23 98 1c f9 44 17 66 42 ....xa?.#...D.fB 00:22:29.528 000003d0 ae cc 98 72 e2 71 87 2a 88 01 5c 38 60 de 8f ab ...r.q.*..\8`... 00:22:29.528 000003e0 98 69 96 61 26 07 e1 e5 54 3c 9c 1a b7 b3 03 46 .i.a&...T<.....F 00:22:29.528 000003f0 f1 a7 5f 08 2f 08 15 12 83 21 b8 a1 01 49 0e 9c .._./....!...I.. 00:22:29.528 [2024-09-27 13:27:20.911701] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key4, hash=2, dhgroup=5, seq=3775755271, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.528 [2024-09-27 13:27:20.912058] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.528 [2024-09-27 13:27:21.000326] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.528 [2024-09-27 13:27:21.000751] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.528 [2024-09-27 13:27:21.000988] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.528 [2024-09-27 13:27:21.051894] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.528 [2024-09-27 13:27:21.052068] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 2 (sha384) 00:22:29.528 [2024-09-27 13:27:21.052277] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 5 (ffdhe8192) 00:22:29.528 [2024-09-27 13:27:21.052403] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.528 [2024-09-27 13:27:21.052649] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.528 ctrlr pubkey: 00:22:29.528 00000000 3e 24 21 6a c2 5c 9b 28 e2 21 ca b2 3e 4f be 64 >$!j.\.(.!..>O.d 00:22:29.528 00000010 98 06 40 e3 26 e5 ab 79 92 9a 8c e0 dd 06 ad d7 ..@.&..y........ 00:22:29.528 00000020 f2 03 c2 51 76 4d e2 67 65 8a 1f 28 6c e6 2c db ...QvM.ge..(l.,. 00:22:29.528 00000030 0e 73 18 7e 1b 84 cc d6 93 77 40 6d 41 5a 43 02 .s.~.....w@mAZC. 
00:22:29.528 00000040 c5 f8 d8 e7 39 59 31 0c e1 f0 d5 0b 16 d2 81 5c ....9Y1........\ 00:22:29.528 00000050 90 31 32 4a d3 6c 0d cc 76 ff fb 6e af c9 8f 25 .12J.l..v..n...% 00:22:29.528 00000060 69 66 ab 07 44 f1 6a 3d f3 a6 57 02 58 76 7c bb if..D.j=..W.Xv|. 00:22:29.528 00000070 f7 21 de fa 62 99 ec 4f 4d b5 c3 5d 99 1c c6 e8 .!..b..OM..].... 00:22:29.528 00000080 b9 a6 43 88 dd c9 76 f4 58 0d 7e a2 4c 4c 2d a9 ..C...v.X.~.LL-. 00:22:29.529 00000090 63 e7 ad f0 24 a0 0f 05 73 53 80 10 b2 0d 7e 85 c...$...sS....~. 00:22:29.529 000000a0 0b 4d 74 85 58 c9 99 04 8b 7c ae af 94 d8 14 46 .Mt.X....|.....F 00:22:29.529 000000b0 ab 39 8b 23 76 1b 4b 34 d0 59 30 63 02 42 1f f0 .9.#v.K4.Y0c.B.. 00:22:29.529 000000c0 0d 00 7d af ea 26 ad 36 74 e1 d7 60 6c 6a 9a 63 ..}..&.6t..`lj.c 00:22:29.529 000000d0 a4 2d 04 22 18 0e 64 40 35 5d c0 0b ae 70 6e 32 .-."..d@5]...pn2 00:22:29.529 000000e0 6c 9b 04 ba a4 64 74 fc 44 a2 fb 4f 3c d8 30 a4 l....dt.D..O<.0. 00:22:29.529 000000f0 ef 95 96 58 b0 5e 68 b2 42 5a b6 77 73 14 64 e2 ...X.^h.BZ.ws.d. 00:22:29.529 00000100 79 76 f3 1c bb 49 fb 62 47 09 4d 21 28 d7 7f 5c yv...I.bG.M!(..\ 00:22:29.529 00000110 9c 7b 8b 74 ac 1e a2 a0 5d e1 9d 74 67 da 9c 49 .{.t....]..tg..I 00:22:29.529 00000120 ee 6e c2 b8 f8 1a 40 80 b6 9d a5 46 7a 02 4e 3b .n....@....Fz.N; 00:22:29.529 00000130 71 cc ce 94 65 c4 b5 7a b3 1a 64 1c 77 ef 84 e9 q...e..z..d.w... 00:22:29.529 00000140 cc b3 79 04 0b 00 b2 f1 96 f5 fe 98 b8 4d 67 85 ..y..........Mg. 00:22:29.529 00000150 98 67 23 3d bc d9 41 df f0 32 e4 2a 9b d6 0f 25 .g#=..A..2.*...% 00:22:29.529 00000160 45 62 94 a1 8c 69 3a e0 db 76 5c 7f d9 df c0 d8 Eb...i:..v\..... 00:22:29.529 00000170 c3 76 86 a3 68 d2 49 9c 1e f4 15 7b 62 ae 4e 04 .v..h.I....{b.N. 00:22:29.529 00000180 68 3f c0 26 02 ef 73 8c 86 ab f5 f2 2c 74 99 79 h?.&..s.....,t.y 00:22:29.529 00000190 5e 54 ff c9 46 1d f0 99 c7 fa 9d b4 40 16 5c 5f ^T..F.......@.\_ 00:22:29.529 000001a0 55 d4 7f 28 5f 13 66 37 c4 53 93 29 21 e5 04 02 U..(_.f7.S.)!... 00:22:29.529 000001b0 42 92 4a a6 f9 b2 46 34 63 6c f4 b3 b4 e5 2d d6 B.J...F4cl....-. 00:22:29.529 000001c0 a0 e4 0c 80 ce 29 e1 4c 35 92 e8 d6 86 fe c8 7f .....).L5....... 00:22:29.529 000001d0 ff d0 58 b7 31 f8 d4 c9 a7 54 16 0c c7 67 37 34 ..X.1....T...g74 00:22:29.529 000001e0 a2 6b d6 d7 4b fe 5e 95 3b 5c 7b 95 00 51 b0 c0 .k..K.^.;\{..Q.. 00:22:29.529 000001f0 2d 42 63 92 c7 bd 71 ec f7 1f 1e e3 3a 07 0c d1 -Bc...q.....:... 00:22:29.529 00000200 5b 9c 56 ad 03 ea 67 ec 27 be 0e 67 6b 4e a8 30 [.V...g.'..gkN.0 00:22:29.529 00000210 44 40 17 52 eb 03 4a 12 3c c6 a8 78 27 9b b4 cc D@.R..J.<..x'... 00:22:29.529 00000220 fc 70 25 3f a9 a4 f2 eb 75 ec 41 e3 6b 82 39 d8 .p%?....u.A.k.9. 00:22:29.529 00000230 0f 52 5b cf 57 52 1b 3d 07 fc ca ac 1e 72 2d 3d .R[.WR.=.....r-= 00:22:29.529 00000240 5e ee 10 cf c8 f9 41 23 a8 b0 c4 91 13 fc 71 e2 ^.....A#......q. 00:22:29.529 00000250 e8 30 50 1d e6 22 c3 e2 a4 4a 0f a4 5f 99 a3 4b .0P.."...J.._..K 00:22:29.529 00000260 df 0b 2f 5c 12 6d dc eb 12 e6 89 c0 7c 83 38 e6 ../\.m......|.8. 00:22:29.529 00000270 70 4d c8 47 e6 f2 5d 28 25 2f 0a c9 0e 8c 61 97 pM.G..](%/....a. 00:22:29.529 00000280 50 0e b0 68 14 02 43 54 19 a5 61 14 8e a9 14 68 P..h..CT..a....h 00:22:29.529 00000290 b6 1b ff 6e e4 d3 5d 52 c4 74 74 2e 3a 67 03 e4 ...n..]R.tt.:g.. 00:22:29.529 000002a0 4f 5e 2b d4 d1 23 6e 08 8a 24 a3 63 75 10 8e f9 O^+..#n..$.cu... 00:22:29.529 000002b0 88 d8 4f f1 a1 cf 7e fa 20 34 e4 3c 6d 96 c0 17 ..O...~. 4.. 
00:22:29.529 000003c0 2d 69 ca 4c c7 f2 0f 80 39 8d c3 d4 8e b6 e9 73 -i.L....9......s 00:22:29.529 000003d0 a9 c9 67 84 82 fd c8 51 ba ca d7 0e 68 10 9a 2e ..g....Q....h... 00:22:29.529 000003e0 d9 30 4a 23 bb b7 4d 72 98 9e 67 c1 a1 93 aa 21 .0J#..Mr..g....! 00:22:29.529 000003f0 dd eb 4d fe 0b 97 2e 6b 12 23 f9 64 9f ef ca a0 ..M....k.#.d.... 00:22:29.529 host pubkey: 00:22:29.529 00000000 8c d9 8a 10 b1 b7 74 80 01 f9 c2 23 0d d3 35 ff ......t....#..5. 00:22:29.529 00000010 10 d4 ed 6d 46 a0 fb ff c6 d3 4e 6a 4f 25 ae 73 ...mF.....NjO%.s 00:22:29.529 00000020 24 7c 98 97 fc 41 53 58 a4 d7 3d 16 d8 69 1d c3 $|...ASX..=..i.. 00:22:29.529 00000030 ae 71 5a 45 d3 55 ce 82 93 c1 32 0c 62 d9 4d 25 .qZE.U....2.b.M% 00:22:29.529 00000040 28 4a 67 99 62 e8 c6 1c 73 51 03 89 83 29 df 7b (Jg.b...sQ...).{ 00:22:29.529 00000050 60 af 08 02 a4 3a 51 b7 e0 ed 6d 3b 1c 73 b0 55 `....:Q...m;.s.U 00:22:29.529 00000060 cf c7 bf 49 16 de e4 a7 ed d8 0c 8f fc 54 e6 16 ...I.........T.. 00:22:29.529 00000070 d0 43 f8 c0 b1 de eb 2d ec 43 8d 9b 7b 09 5d 02 .C.....-.C..{.]. 00:22:29.529 00000080 bd 6b 9a 8a 1d 34 55 58 27 96 10 fd 1b e0 6d 1f .k...4UX'.....m. 00:22:29.529 00000090 ab 07 47 70 fd b2 d8 47 55 98 57 ec fb 82 a5 01 ..Gp...GU.W..... 00:22:29.529 000000a0 24 91 34 04 18 8f 79 17 6d 1b 37 37 b7 f3 86 e6 $.4...y.m.77.... 00:22:29.529 000000b0 73 d2 4c 78 9c b5 aa 0e 10 c7 1c 59 ab b8 ec d4 s.Lx.......Y.... 00:22:29.529 000000c0 bf 6d 5c b0 fd 1d b3 b9 b1 77 ab 65 d1 55 f2 e9 .m\......w.e.U.. 00:22:29.529 000000d0 5b 10 7b d9 c5 30 6b eb d1 d3 84 92 e5 96 fa dc [.{..0k......... 00:22:29.529 000000e0 87 98 6b 98 94 03 33 d0 37 6c 5d fa b5 45 9c ab ..k...3.7l]..E.. 00:22:29.529 000000f0 6e 39 d8 5d a0 7d eb f0 d3 ae f4 97 99 c1 27 f6 n9.].}........'. 00:22:29.529 00000100 21 45 61 3a 93 a3 c1 f0 88 55 2a 10 7e 6f 86 cd !Ea:.....U*.~o.. 00:22:29.529 00000110 ac 1f bb 30 9b a6 f5 b2 83 7d fe 0d 74 a3 18 06 ...0.....}..t... 00:22:29.529 00000120 61 23 41 de 51 f3 8d c6 5f 18 4b 68 79 74 00 6d a#A.Q..._.Khyt.m 00:22:29.529 00000130 a6 66 6f c9 c4 21 26 94 40 7f c4 a8 7e 19 f8 eb .fo..!&.@...~... 00:22:29.529 00000140 ae 26 db 0d 49 8c a4 1c 7b a9 95 1a 06 03 8a 81 .&..I...{....... 00:22:29.529 00000150 d5 f1 eb a1 bf 77 ba d0 b6 a7 40 3f 73 aa b6 98 .....w....@?s... 00:22:29.529 00000160 14 e5 55 44 98 fd 98 b9 ae 60 fd 95 3e 4d cc 74 ..UD.....`..>M.t 00:22:29.529 00000170 6f e2 37 9e 2a 26 fb 68 d8 34 58 87 f3 ab f3 f9 o.7.*&.h.4X..... 00:22:29.529 00000180 07 8c ac 5e 05 69 dd 97 30 cd d3 9e 68 24 3c 4c ...^.i..0...h$.M7M..:... 00:22:29.529 000003d0 c8 ba 31 73 8e b2 86 12 fc bd 65 7e e6 5a 50 89 ..1s......e~.ZP. 00:22:29.529 000003e0 40 32 ce 16 0f fb 99 b2 13 39 26 47 70 c1 b0 5e @2.......9&Gp..^ 00:22:29.529 000003f0 af 6d 98 52 0a bb 33 f4 c0 60 40 41 eb 50 28 17 .m.R..3..`@A.P(. 00:22:29.529 dh secret: 00:22:29.529 00000000 51 74 2f 8b 28 96 f0 86 0d 73 21 8b bb 21 2b 31 Qt/.(....s!..!+1 00:22:29.529 00000010 b8 a3 be db f7 cc 3e 82 2a 45 d0 1b 55 42 f8 89 ......>.*E..UB.. 00:22:29.529 00000020 c9 dc bc 8e cd ed ff 71 75 55 49 67 23 9f e9 56 .......quUIg#..V 00:22:29.529 00000030 c5 bf d9 d2 ab 3d 3b 27 55 55 f2 c1 1a 62 9e a2 .....=;'UU...b.. 00:22:29.529 00000040 62 17 6f 97 a6 0a 2c 88 20 87 6c 19 ac d5 4a 2b b.o...,. .l...J+ 00:22:29.529 00000050 be 90 58 8c a1 4a 87 26 69 c4 a3 7c a2 9c 3f 44 ..X..J.&i..|..?D 00:22:29.529 00000060 c9 fd 6d 87 8b f4 1f 67 9d 07 2c ef c4 2c db 9c ..m....g..,..,.. 00:22:29.529 00000070 f1 f5 72 a4 5c e4 bf 63 37 66 43 fc 36 30 40 b3 ..r.\..c7fC.60@. 
00:22:29.530 00000080 11 f4 60 65 81 37 c8 e3 bb d2 f1 f8 31 01 2e de ..`e.7......1... 00:22:29.530 00000090 4f 01 b8 48 e4 6b 87 5c 0f 65 37 67 00 f9 78 59 O..H.k.\.e7g..xY 00:22:29.530 000000a0 94 74 4d 91 07 2a d1 df 82 47 f5 b5 5c 8d ff 53 .tM..*...G..\..S 00:22:29.530 000000b0 41 31 c2 4e 17 8b 70 aa 71 ec 21 4e a5 23 54 92 A1.N..p.q.!N.#T. 00:22:29.530 000000c0 11 2f a8 c1 39 ac 17 e6 13 b7 4e 27 0b a0 f8 77 ./..9.....N'...w 00:22:29.530 000000d0 53 c7 64 63 51 3d dc 0b 3f 9a b9 87 10 5f 13 56 S.dcQ=..?...._.V 00:22:29.530 000000e0 87 b5 7d b7 e0 68 c5 af c1 d9 8f 74 56 b5 db 27 ..}..h.....tV..' 00:22:29.530 000000f0 20 bf ea e1 0c 82 ed d0 a0 02 36 6a 4d 81 75 60 .........6jM.u` 00:22:29.530 00000100 7d 8a 9f 64 1d b0 65 eb a6 4d b5 0e c1 72 23 47 }..d..e..M...r#G 00:22:29.530 00000110 0b 4a 32 8e 77 a0 1b fd a3 bd 5a 77 c4 e6 6f 51 .J2.w.....Zw..oQ 00:22:29.530 00000120 af d6 a1 ed cd a8 ae 2c 98 9a 4b ea 7b 1d 1b 3b .......,..K.{..; 00:22:29.530 00000130 f1 11 13 d2 d8 32 01 73 46 17 d2 c4 78 f1 26 8a .....2.sF...x.&. 00:22:29.530 00000140 1a ad 4b 9e 11 d5 36 9c 6e 48 68 f0 f3 ff 71 d3 ..K...6.nHh...q. 00:22:29.530 00000150 9f 5d 0b 9e c2 ab 1e 0a 87 b4 8a 2f fa 50 12 0f .]........./.P.. 00:22:29.530 00000160 1d c2 59 41 a2 88 d2 00 7b 34 ee d3 f2 4d 36 bb ..YA....{4...M6. 00:22:29.530 00000170 bf af be ad 36 ab cc d3 2b 5d b9 70 e7 18 20 7c ....6...+].p.. | 00:22:29.530 00000180 cb 86 2b 00 fd 9d b4 99 8f 3d 6a b5 5d e2 b0 49 ..+......=j.]..I 00:22:29.530 00000190 65 e6 bb 22 3f ec ae 65 e2 a4 1d 46 38 c1 12 d4 e.."?..e...F8... 00:22:29.530 000001a0 f1 a0 b9 dc 01 5e 3a 28 ce a6 23 26 f2 d2 92 97 .....^:(..#&.... 00:22:29.530 000001b0 89 9b 94 f4 59 bd 9d 7a 9e 80 29 72 b7 8e 05 93 ....Y..z..)r.... 00:22:29.530 000001c0 fc 55 b1 75 a4 fb 06 01 31 e3 24 8f 97 3b bc 2f .U.u....1.$..;./ 00:22:29.530 000001d0 1d b1 5e b2 89 d3 0c ef f2 66 c2 48 7f a1 66 c2 ..^......f.H..f. 00:22:29.530 000001e0 b0 6c 9b a5 9a 1a 4a 73 d9 18 22 30 2b 04 bb 99 .l....Js.."0+... 00:22:29.530 000001f0 11 08 68 22 36 57 57 e6 64 ef db 09 03 4c ba 68 ..h"6WW.d....L.h 00:22:29.530 00000200 69 11 4f 52 ee 84 65 84 cb db 57 a3 e6 38 91 cb i.OR..e...W..8.. 00:22:29.530 00000210 10 78 4a 92 51 c2 68 e0 f6 cf 1c 8f 1c 1f c6 65 .xJ.Q.h........e 00:22:29.530 00000220 2e 83 a9 d8 92 f7 06 e8 eb 7e d0 ae 57 ce f4 bf .........~..W... 00:22:29.530 00000230 45 64 9f 9f 46 f7 86 6d 1b 26 0c 5d ee 44 35 31 Ed..F..m.&.].D51 00:22:29.530 00000240 63 52 b2 7f 7c 9c 3e a3 b4 1e 1c 52 c5 9d 16 2c cR..|.>....R..., 00:22:29.530 00000250 c6 83 95 87 c0 a8 56 11 a5 37 51 0e ed 2c f8 e4 ......V..7Q..,.. 00:22:29.530 00000260 fb b2 b3 79 8f 49 fb cd c1 22 b1 2b b8 28 cd 15 ...y.I...".+.(.. 00:22:29.530 00000270 e5 a8 bc 00 49 38 fa e0 1f a0 73 d6 ed fc 3b ea ....I8....s...;. 00:22:29.530 00000280 b1 bd ff 25 cd aa 4d 99 93 a2 a3 33 05 53 bf 1d ...%..M....3.S.. 00:22:29.530 00000290 db 00 7a 63 01 9d 2a 86 e2 0b 97 91 39 7b 63 d6 ..zc..*.....9{c. 00:22:29.530 000002a0 2c 92 46 e4 fa 1b 66 19 86 b3 37 60 b2 95 4d 90 ,.F...f...7`..M. 00:22:29.530 000002b0 11 15 22 d7 c8 31 37 8d d2 e0 e0 77 41 a9 b1 75 .."..17....wA..u 00:22:29.530 000002c0 de e9 d3 ec 08 2a fc 2f 53 95 57 a6 9d 9f 3f 23 .....*./S.W...?# 00:22:29.530 000002d0 46 b9 e3 61 9c cb 93 44 30 14 52 b5 e2 d6 87 6c F..a...D0.R....l 00:22:29.530 000002e0 31 72 67 d8 82 64 5f 59 a7 27 d2 ca d1 f6 c6 df 1rg..d_Y.'...... 
00:22:29.530 000002f0 27 8a 38 9f 2c 4c f5 14 0b 8c d2 6c e1 d9 1c 60 '.8.,L.....l...` 00:22:29.530 00000300 75 9a 6e b2 00 41 3e 09 b8 66 07 5a 7c 61 35 7e u.n..A>..f.Z|a5~ 00:22:29.530 00000310 8a fd 30 ed bb 49 86 96 73 21 a8 84 9a e4 cc 13 ..0..I..s!...... 00:22:29.530 00000320 54 37 9e f9 13 1b 26 30 f0 77 f3 c6 f4 de 1a 7f T7....&0.w...... 00:22:29.530 00000330 99 f3 1e 8a f3 f2 49 84 af ef 2d e5 4a 94 8e e2 ......I...-.J... 00:22:29.531 00000340 a3 22 bf 64 e7 a7 09 ee 6f cf a2 a2 21 b5 d0 20 .".d....o...!.. 00:22:29.531 00000350 e9 fe 23 1d 2e e7 26 4a 4f a1 15 c5 68 97 99 1d ..#...&JO...h... 00:22:29.531 00000360 28 f7 94 9c 77 cd df 27 e9 cc df 20 20 b9 e9 1c (...w..'... ... 00:22:29.531 00000370 a2 2b 35 fd 06 84 40 15 de b9 1e d1 e6 66 f6 c2 .+5...@......f.. 00:22:29.531 00000380 a4 ba 86 97 bb 6d 45 6e d5 08 75 f2 52 bd 96 29 .....mEn..u.R..) 00:22:29.531 00000390 b8 38 9c 44 82 d0 8d f8 ae 24 13 b4 8e e5 cf c1 .8.D.....$...... 00:22:29.531 000003a0 f2 fe de 21 a3 39 8b e8 84 42 46 1c 20 45 52 7e ...!.9...BF. ER~ 00:22:29.531 000003b0 f8 ec 27 6e c2 35 7a 02 20 19 7f f4 46 ed 88 8c ..'n.5z. ...F... 00:22:29.531 000003c0 d9 d2 85 e4 e1 b0 69 14 26 c7 4e a3 bc 96 09 b3 ......i.&.N..... 00:22:29.531 000003d0 be 24 24 4a 10 e2 ee 5b 12 bd 0c 27 78 d1 b3 fa .$$J...[...'x... 00:22:29.531 000003e0 85 b8 32 28 4a 6c 5c f4 96 92 ec 7a 47 3e eb 31 ..2(Jl\....zG>.1 00:22:29.531 000003f0 c0 03 9f a1 00 f2 71 5c b3 aa be b5 55 d1 64 cb ......q\....U.d. 00:22:29.531 [2024-09-27 13:27:21.221245] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key4, hash=2, dhgroup=5, seq=3775755272, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=48 00:22:29.531 [2024-09-27 13:27:21.221636] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.531 [2024-09-27 13:27:21.306935] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.531 [2024-09-27 13:27:21.307512] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.531 [2024-09-27 13:27:21.307740] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.531 [2024-09-27 13:27:21.406651] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.531 [2024-09-27 13:27:21.406839] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.531 [2024-09-27 13:27:21.407025] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 1 (ffdhe2048) 00:22:29.531 [2024-09-27 13:27:21.407196] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.531 [2024-09-27 13:27:21.407436] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.531 ctrlr pubkey: 00:22:29.531 00000000 0b f0 2e 7c 50 14 96 ce be 5f 83 8e 90 f8 34 9a ...|P...._....4. 00:22:29.531 00000010 85 c8 8d 41 08 f2 c7 ff dd 6f 16 88 f1 bc 61 82 ...A.....o....a. 
00:22:29.531 00000020 cc b6 46 09 1b c6 4a 25 72 01 24 c5 b7 28 06 c0 ..F...J%r.$..(.. 00:22:29.531 00000030 b7 d6 a7 16 8d 44 5a 5e 0a e2 3a f9 c1 78 9c 9d .....DZ^..:..x.. 00:22:29.531 00000040 91 48 57 ec 80 8b d4 70 ab f5 2c 3a 8e 9d 93 81 .HW....p..,:.... 00:22:29.531 00000050 0e 88 4a 9f 5f e5 14 b8 8b 1c 44 d5 cd 2f 44 42 ..J._.....D../DB 00:22:29.531 00000060 29 a8 83 47 90 87 b5 6b 45 12 10 3c 4e d0 f4 ca )..G...kE.....(/.p.PB>.... 00:22:29.531 dh secret: 00:22:29.531 00000000 d6 e8 f6 2e 5e 48 2d 85 54 f1 2a 93 aa 44 3e 32 ....^H-.T.*..D>2 00:22:29.531 00000010 8e d6 c8 f2 1c 5f fb 78 f5 02 5f b9 1b a8 66 1a ....._.x.._...f. 00:22:29.531 00000020 09 49 1f c9 7e a2 fb 30 46 35 80 a3 8e 72 e6 3e .I..~..0F5...r.> 00:22:29.531 00000030 d9 4f 88 f2 ea f2 f3 5d 46 ce 81 15 6e f8 c5 8a .O.....]F...n... 00:22:29.531 00000040 47 74 53 92 ae 1c af bc 71 75 16 9f 34 af e5 fd GtS.....qu..4... 00:22:29.531 00000050 51 aa 83 9e 42 5e d9 5f f1 9e 7b d2 95 b1 e6 81 Q...B^._..{..... 00:22:29.531 00000060 eb 49 cc a3 3e 07 2b 93 3d 7b 6e 0b 4e 15 0c 03 .I..>.+.={n.N... 00:22:29.531 00000070 3a 27 83 a0 73 53 68 52 ad f4 58 5b 68 a2 ce 6d :'..sShR..X[h..m 00:22:29.531 00000080 28 80 ba ad 6c 5c f8 aa fa 2a dd 51 9e 95 89 dc (...l\...*.Q.... 00:22:29.531 00000090 a8 a9 c6 18 18 61 e1 4a c1 e6 2e a2 a1 66 22 2b .....a.J.....f"+ 00:22:29.531 000000a0 d7 01 34 e9 20 f6 b7 36 bc 9c 21 de 3e 84 ed 13 ..4. ..6..!.>... 00:22:29.531 000000b0 0b 60 72 57 bf c3 7a 98 da 36 fc 07 d1 f8 cd 0e .`rW..z..6...... 00:22:29.531 000000c0 e9 7c 1c 1b 9c e3 74 3a 96 83 3b 9c 10 5a 81 e1 .|....t:..;..Z.. 00:22:29.531 000000d0 64 1f 01 38 47 ad 8b 9b 54 7d 05 d5 76 7b ab 6d d..8G...T}..v{.m 00:22:29.531 000000e0 e8 a0 19 ea 08 1f ae 28 fb 8f 2e 93 db f4 2e 80 .......(........ 00:22:29.531 000000f0 31 e8 21 5a bc 7d b6 c1 8d 79 e0 51 0c a9 eb dd 1.!Z.}...y.Q.... 
00:22:29.531 [2024-09-27 13:27:21.414161] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key0, hash=3, dhgroup=1, seq=3775755273, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.531 [2024-09-27 13:27:21.414748] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.531 [2024-09-27 13:27:21.418654] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.531 [2024-09-27 13:27:21.419023] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.531 [2024-09-27 13:27:21.419221] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.531 [2024-09-27 13:27:21.419412] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.531 [2024-09-27 13:27:21.471177] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.531 [2024-09-27 13:27:21.471490] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.531 [2024-09-27 13:27:21.471699] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 1 (ffdhe2048) 00:22:29.531 [2024-09-27 13:27:21.471863] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.531 [2024-09-27 13:27:21.472110] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.531 ctrlr pubkey: 00:22:29.531 00000000 0b f0 2e 7c 50 14 96 ce be 5f 83 8e 90 f8 34 9a ...|P...._....4. 00:22:29.531 00000010 85 c8 8d 41 08 f2 c7 ff dd 6f 16 88 f1 bc 61 82 ...A.....o....a. 00:22:29.531 00000020 cc b6 46 09 1b c6 4a 25 72 01 24 c5 b7 28 06 c0 ..F...J%r.$..(.. 00:22:29.531 00000030 b7 d6 a7 16 8d 44 5a 5e 0a e2 3a f9 c1 78 9c 9d .....DZ^..:..x.. 00:22:29.531 00000040 91 48 57 ec 80 8b d4 70 ab f5 2c 3a 8e 9d 93 81 .HW....p..,:.... 00:22:29.531 00000050 0e 88 4a 9f 5f e5 14 b8 8b 1c 44 d5 cd 2f 44 42 ..J._.....D../DB 00:22:29.531 00000060 29 a8 83 47 90 87 b5 6b 45 12 10 3c 4e d0 f4 ca )..G...kE..3Yc.... 00:22:29.531 00000060 3c 20 34 f0 54 54 a7 dc 1f 74 5e d0 7b 3a 20 a5 < 4.TT...t^.{: . 00:22:29.531 00000070 72 85 3b 3a d4 31 3a 02 24 8b 1d 3d 1c 89 b2 78 r.;:.1:.$..=...x 00:22:29.531 00000080 30 67 65 4c 13 59 17 34 b8 43 2b 69 34 c4 74 c8 0geL.Y.4.C+i4.t. 00:22:29.531 00000090 e5 7f 84 b2 83 ca a4 88 a8 a7 45 76 e3 fd f3 28 ..........Ev...( 00:22:29.531 000000a0 95 0b 94 6a a0 f8 de a5 a3 ff c7 02 cb c4 ea 3a ...j...........: 00:22:29.531 000000b0 77 14 88 0c 55 bb 6e 08 b4 15 9f 71 d9 2f a3 7f w...U.n....q./.. 00:22:29.531 000000c0 d2 85 8c 99 d6 79 b7 9a 3b 28 bd 26 a4 82 1a 28 .....y..;(.&...( 00:22:29.531 000000d0 77 2a f2 ce 07 50 b9 c2 5f 93 0f e7 9f f7 ba c2 w*...P.._....... 00:22:29.531 000000e0 b6 c8 3b 3b 12 4d 44 3f ee 60 8e 55 7c 30 69 57 ..;;.MD?.`.U|0iW 00:22:29.531 000000f0 8d 2f 12 68 b1 33 df e7 1a cc 0e 8e 5d 78 a2 9c ./.h.3......]x.. 
00:22:29.531 dh secret: 00:22:29.531 00000000 d8 bd 37 4a 9a 1f 25 e8 f9 51 04 d0 ee 24 e1 aa ..7J..%..Q...$.. 00:22:29.531 00000010 ad 39 07 ec 53 c0 53 9c f3 b9 2e 14 22 0c 05 42 .9..S.S....."..B 00:22:29.531 00000020 46 c4 f4 40 39 4c e6 50 7c 57 b3 12 bd 5a 0a 40 F..@9L.P|W...Z.@ 00:22:29.531 00000030 40 b3 c8 75 e8 bd c7 7f 34 97 75 74 57 d1 42 8b @..u....4.utW.B. 00:22:29.531 00000040 21 0a f8 90 03 88 13 33 f1 fe a8 4c 2d 9f fa 96 !......3...L-... 00:22:29.531 00000050 97 2a 41 e2 66 47 a3 94 02 a4 95 38 c3 e4 00 66 .*A.fG.....8...f 00:22:29.531 00000060 16 d4 9d d3 37 a7 cb 88 97 26 73 13 2c c3 cd be ....7....&s.,... 00:22:29.531 00000070 c9 42 07 c1 58 40 0c cd 19 20 23 8a d9 c8 1c 9b .B..X@... #..... 00:22:29.531 00000080 1d 5a 20 59 98 02 93 9c 43 73 2e e1 e2 18 a9 2b .Z Y....Cs.....+ 00:22:29.531 00000090 0c 34 03 72 dc 73 13 29 c9 74 ca c2 ac 33 90 3c .4.r.s.).t...3.< 00:22:29.531 000000a0 80 e0 f7 fe b6 37 cd 67 51 ad f5 df 2c 42 9b 51 .....7.gQ...,B.Q 00:22:29.531 000000b0 20 7d dc d3 e1 4f a7 e8 b7 93 0b 3f cd d0 c0 35 }...O.....?...5 00:22:29.531 000000c0 ee 29 76 b9 28 fc 20 3f 32 33 d0 9a 11 cb ae c0 .)v.(. ?23...... 00:22:29.531 000000d0 a1 72 c4 f0 42 65 61 d4 1b 9d 9d 5e 51 2d 80 48 .r..Bea....^Q-.H 00:22:29.531 000000e0 d0 ec 35 11 64 55 b9 c6 f4 5b 9a d7 4b 4f f3 d8 ..5.dU...[..KO.. 00:22:29.531 000000f0 4d 4d 87 05 e7 d0 37 4f b7 f8 30 7c 76 1c 4e a6 MM....7O..0|v.N. 00:22:29.531 [2024-09-27 13:27:21.479883] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key0, hash=3, dhgroup=1, seq=3775755274, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.531 [2024-09-27 13:27:21.480223] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.531 [2024-09-27 13:27:21.484242] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.531 [2024-09-27 13:27:21.484760] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.531 [2024-09-27 13:27:21.485001] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.532 [2024-09-27 13:27:21.485310] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.532 [2024-09-27 13:27:21.579931] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.532 [2024-09-27 13:27:21.580281] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.532 [2024-09-27 13:27:21.580513] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 1 (ffdhe2048) 00:22:29.532 [2024-09-27 13:27:21.580797] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.532 [2024-09-27 13:27:21.581059] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.532 ctrlr pubkey: 00:22:29.532 00000000 36 cf 0c e9 37 4d 17 2c f8 ba e5 06 5c dd 7f 45 6...7M.,....\..E 
00:22:29.532 00000010 74 18 ff 0a 8d ca ed 95 ad 5b 8e f5 81 b7 69 fa t........[....i. 00:22:29.532 00000020 a6 d5 aa 30 3e 5b 89 a9 42 da 2f 11 91 a5 d3 2f ...0>[..B./..../ 00:22:29.532 00000030 a3 f7 f3 ad d0 10 98 c7 47 90 14 c4 7e a9 6f e8 ........G...~.o. 00:22:29.532 00000040 2e e0 8a 8d 63 5f f2 87 17 70 cf 05 02 0d 69 4b ....c_...p....iK 00:22:29.532 00000050 f8 44 e4 31 3d 4e 7a a1 ca 9a a3 7f b2 5e f7 d8 .D.1=Nz......^.. 00:22:29.532 00000060 c4 07 ef e9 73 cb 78 38 99 b2 65 84 50 5b 9f 05 ....s.x8..e.P[.. 00:22:29.532 00000070 b1 2b d3 30 8e df 17 1d 90 34 24 ef 85 ee a3 ae .+.0.....4$..... 00:22:29.532 00000080 ff f3 ea 2c 8b 80 20 54 a3 23 ef db a3 3b 85 f4 ...,.. T.#...;.. 00:22:29.532 00000090 8b 3c fb cb b7 3e d6 16 28 8b b6 3a 2d a3 4e 4d .<...>..(..:-.NM 00:22:29.532 000000a0 4a 50 2d e6 6a 8f c0 c2 cf 7f bb 33 8f 9e 5c 0c JP-.j......3..\. 00:22:29.532 000000b0 54 c4 9c 9a a3 97 d4 00 e8 4c 3e 8c 76 0b 36 09 T........L>.v.6. 00:22:29.532 000000c0 25 8b 87 33 67 d8 4f 4b 62 d8 83 3f a1 88 54 cf %..3g.OKb..?..T. 00:22:29.532 000000d0 0c 1c 21 7f 04 3c 22 1a 7f 42 80 c8 86 52 e1 fa ..!..<"..B...R.. 00:22:29.532 000000e0 00 34 fa b9 78 87 a3 58 97 51 0d fa 26 f1 ae b5 .4..x..X.Q..&... 00:22:29.532 000000f0 91 9f 5c ae 0c 09 61 75 45 2b 28 ac 88 19 e3 e3 ..\...auE+(..... 00:22:29.532 host pubkey: 00:22:29.532 00000000 30 76 f3 e8 93 a8 12 1f 57 f4 98 80 cb 0b 35 88 0v......W.....5. 00:22:29.532 00000010 02 ea 87 d6 2e 9a 64 eb b9 ff 60 0e 9f 93 12 29 ......d...`....) 00:22:29.532 00000020 00 ae 4b c3 e5 30 14 3f 20 bb ef 8d f8 72 d5 68 ..K..0.? ....r.h 00:22:29.532 00000030 d9 5f 75 f8 8e 36 c4 cb 79 50 ba 69 bd 8f f6 13 ._u..6..yP.i.... 00:22:29.532 00000040 cc ee d4 8e 87 72 08 bc 2f f2 85 7b 91 8c 77 32 .....r../..{..w2 00:22:29.532 00000050 68 5f c7 f8 2b 80 61 fb 68 5c d2 ff d2 29 00 09 h_..+.a.h\...).. 00:22:29.532 00000060 48 60 56 23 3c 14 9c 36 1c 31 89 ad 48 2c 9a 16 H`V#<..6.1..H,.. 00:22:29.532 00000070 31 e3 da 4b 64 83 95 ee b6 0a ed 12 7a 3c dd 0d 1..Kd.......z<.. 00:22:29.532 00000080 6a d6 dc 13 d1 1c 63 84 76 bb 18 f5 3f 62 d9 85 j.....c.v...?b.. 00:22:29.532 00000090 c2 79 91 34 4b bc 88 d9 2d 3c 7d 87 27 50 aa df .y.4K...-<}.'P.. 00:22:29.532 000000a0 bf 1e 93 66 3b 3c 00 d3 43 43 2e 2a 4e fe 83 d5 ...f;<..CC.*N... 00:22:29.532 000000b0 9a 56 90 98 0e 45 23 29 e5 a6 5c a7 be ad bc 5d .V...E#)..\....] 00:22:29.532 000000c0 6a 65 fa ab 95 be 92 26 ae a7 65 d3 64 21 d7 01 je.....&..e.d!.. 00:22:29.532 000000d0 62 64 4a af c2 2c 13 f1 7d d7 35 b7 0f cc 87 d6 bdJ..,..}.5..... 00:22:29.532 000000e0 35 6f 40 97 fd 66 c1 4a aa 47 fe 3c d2 e1 fb 39 5o@..f.J.G.<...9 00:22:29.532 000000f0 58 46 cd 32 17 d9 4b fa 95 fe 44 48 b1 4a cb 78 XF.2..K...DH.J.x 00:22:29.532 dh secret: 00:22:29.532 00000000 7d 65 ba 71 48 70 0d 50 fd f4 15 09 98 3d 1f fb }e.qHp.P.....=.. 00:22:29.532 00000010 7e 25 f9 b2 b3 2f 77 ea df 4d df 46 d6 d9 bb b3 ~%.../w..M.F.... 00:22:29.532 00000020 7b d0 80 51 3a 1c b1 05 f2 f4 88 6a 8a 4d 63 1c {..Q:......j.Mc. 00:22:29.532 00000030 bf 7a 56 e4 47 fb 11 94 8c 0a 52 ce c6 63 b8 27 .zV.G.....R..c.' 00:22:29.532 00000040 49 07 16 a0 27 da 3b e0 11 bc 97 e7 b6 64 67 d8 I...'.;......dg. 00:22:29.532 00000050 fd 3e 20 3b 37 e9 38 3f 61 12 31 0c 84 03 54 72 .> ;7.8?a.1...Tr 00:22:29.532 00000060 3c 1e d2 4e 87 b8 31 be bd 0b 1d ae 8d 93 44 89 <..N..1.......D. 00:22:29.532 00000070 3b 02 b5 d7 37 92 eb 3c 67 2e b5 03 ae 43 61 96 ;...7..t~5 00:22:29.532 00000090 be 0f 61 88 ac a4 be 27 c2 bb bc fc 0e 30 91 ce ..a....'.....0.. 
00:22:29.532 000000a0 24 c6 ed 70 2c b7 ce c7 34 3d 64 69 a6 d9 b9 e2 $..p,...4=di.... 00:22:29.532 000000b0 62 76 6f 98 93 1b 79 8d fb e1 32 08 1f 76 8a a6 bvo...y...2..v.. 00:22:29.532 000000c0 8a 0b a0 a2 d0 bd 47 cb 11 ae b6 cd c2 2d e0 07 ......G......-.. 00:22:29.532 000000d0 58 be 15 ed 0e 41 24 bc c8 05 88 3b 06 c2 79 2e X....A$....;..y. 00:22:29.532 000000e0 72 86 91 26 7d ab b3 53 14 2d e1 1d b2 02 7b f8 r..&}..S.-....{. 00:22:29.532 000000f0 b3 38 09 0f 5a c3 6a 70 a4 c5 3b 79 d2 f6 e1 34 .8..Z.jp..;y...4 00:22:29.532 [2024-09-27 13:27:21.589576] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key1, hash=3, dhgroup=1, seq=3775755275, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.532 [2024-09-27 13:27:21.589973] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.532 [2024-09-27 13:27:21.593939] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.532 [2024-09-27 13:27:21.594304] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.532 [2024-09-27 13:27:21.594646] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.532 [2024-09-27 13:27:21.594877] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.532 [2024-09-27 13:27:21.646657] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.532 [2024-09-27 13:27:21.646912] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.532 [2024-09-27 13:27:21.647120] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 1 (ffdhe2048) 00:22:29.532 [2024-09-27 13:27:21.647314] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.532 [2024-09-27 13:27:21.647602] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.532 ctrlr pubkey: 00:22:29.532 00000000 36 cf 0c e9 37 4d 17 2c f8 ba e5 06 5c dd 7f 45 6...7M.,....\..E 00:22:29.532 00000010 74 18 ff 0a 8d ca ed 95 ad 5b 8e f5 81 b7 69 fa t........[....i. 00:22:29.532 00000020 a6 d5 aa 30 3e 5b 89 a9 42 da 2f 11 91 a5 d3 2f ...0>[..B./..../ 00:22:29.532 00000030 a3 f7 f3 ad d0 10 98 c7 47 90 14 c4 7e a9 6f e8 ........G...~.o. 00:22:29.532 00000040 2e e0 8a 8d 63 5f f2 87 17 70 cf 05 02 0d 69 4b ....c_...p....iK 00:22:29.532 00000050 f8 44 e4 31 3d 4e 7a a1 ca 9a a3 7f b2 5e f7 d8 .D.1=Nz......^.. 00:22:29.532 00000060 c4 07 ef e9 73 cb 78 38 99 b2 65 84 50 5b 9f 05 ....s.x8..e.P[.. 00:22:29.532 00000070 b1 2b d3 30 8e df 17 1d 90 34 24 ef 85 ee a3 ae .+.0.....4$..... 00:22:29.532 00000080 ff f3 ea 2c 8b 80 20 54 a3 23 ef db a3 3b 85 f4 ...,.. T.#...;.. 00:22:29.532 00000090 8b 3c fb cb b7 3e d6 16 28 8b b6 3a 2d a3 4e 4d .<...>..(..:-.NM 00:22:29.532 000000a0 4a 50 2d e6 6a 8f c0 c2 cf 7f bb 33 8f 9e 5c 0c JP-.j......3..\. 
00:22:29.532 000000b0 54 c4 9c 9a a3 97 d4 00 e8 4c 3e 8c 76 0b 36 09 T........L>.v.6. 00:22:29.532 000000c0 25 8b 87 33 67 d8 4f 4b 62 d8 83 3f a1 88 54 cf %..3g.OKb..?..T. 00:22:29.532 000000d0 0c 1c 21 7f 04 3c 22 1a 7f 42 80 c8 86 52 e1 fa ..!..<"..B...R.. 00:22:29.532 000000e0 00 34 fa b9 78 87 a3 58 97 51 0d fa 26 f1 ae b5 .4..x..X.Q..&... 00:22:29.532 000000f0 91 9f 5c ae 0c 09 61 75 45 2b 28 ac 88 19 e3 e3 ..\...auE+(..... 00:22:29.532 host pubkey: 00:22:29.532 00000000 68 40 5f ad bc 34 0e b1 a8 4a 6d 37 38 be d4 60 h@_..4...Jm78..` 00:22:29.532 00000010 7a 01 9d 20 c2 f9 e6 bf b9 71 8d ca 25 b4 8e d9 z.. .....q..%... 00:22:29.532 00000020 21 be 09 9d e1 f4 5d 9c fa ec 3c a5 4f 5e a7 b2 !.....]...<.O^.. 00:22:29.532 00000030 f8 91 fe 8e 7a 5f 33 ce b4 15 8d 85 56 43 68 8c ....z_3.....VCh. 00:22:29.532 00000040 77 07 9c 22 90 20 88 66 0b 40 2d 74 6c 67 d2 59 w..". .f.@-tlg.Y 00:22:29.532 00000050 31 ce 9d e2 08 7c 88 7d c2 4e e1 a7 21 d6 9e 6d 1....|.}.N..!..m 00:22:29.532 00000060 bf 78 6b 06 02 79 23 18 e4 6f fe bb ff 21 29 7e .xk..y#..o...!)~ 00:22:29.532 00000070 ce 67 41 61 f3 a4 5c 17 6c a7 b6 ce f5 77 e0 8f .gAa..\.l....w.. 00:22:29.532 00000080 22 0a 5e 53 26 01 02 07 1c 89 27 6c 03 b4 ff 82 ".^S&.....'l.... 00:22:29.532 00000090 ae e1 4f 74 9d 09 84 15 6a 44 51 52 7f 20 a1 6e ..Ot....jDQR. .n 00:22:29.532 000000a0 b4 b1 d3 61 3f 42 38 7e 3f d5 10 59 eb 34 d7 65 ...a?B8~?..Y.4.e 00:22:29.532 000000b0 93 81 fe 47 88 0e 0b ca ee 60 92 2d 73 e6 0a 6f ...G.....`.-s..o 00:22:29.532 000000c0 7f ae 0e 80 ea 5e d9 87 19 f2 65 d6 4b a4 b6 85 .....^....e.K... 00:22:29.532 000000d0 47 b8 7f db cd ba 8b 94 d5 9e 52 08 52 b4 b2 5f G.........R.R.._ 00:22:29.532 000000e0 c1 39 e3 fe d7 b5 22 52 d1 c7 6e 90 07 e0 5a f3 .9...."R..n...Z. 00:22:29.532 000000f0 3a aa 0a 27 b0 d0 bf 17 7f 30 24 7e f2 2f e2 98 :..'.....0$~./.. 00:22:29.532 dh secret: 00:22:29.532 00000000 a2 b4 29 87 db 07 5f 8e d3 4c b0 6d f9 a8 48 f3 ..)..._..L.m..H. 00:22:29.532 00000010 5b 96 b2 4a 55 a2 f3 c0 a4 9a 20 27 38 1b ce 22 [..JU..... '8.." 00:22:29.532 00000020 1c ec 8f 64 56 5d 76 d9 35 86 eb e5 ff 1d 91 07 ...dV]v.5....... 00:22:29.532 00000030 57 fe 8a 78 7d a1 0a e7 d0 f0 ae 26 37 c8 3a 5f W..x}......&7.:_ 00:22:29.532 00000040 4a 31 04 0e 40 f1 8e f8 34 5a c4 a7 d9 b3 ef 67 J1..@...4Z.....g 00:22:29.532 00000050 5f b7 a2 b3 81 ec c4 6d 7a 6d 36 08 99 15 d0 b0 _......mzm6..... 00:22:29.532 00000060 18 d7 ae ef a4 b2 94 63 26 86 28 ae 9a d0 63 72 .......c&.(...cr 00:22:29.532 00000070 0b 55 76 1d ab 24 25 81 08 4f 28 65 f9 0d ec 5e .Uv..$%..O(e...^ 00:22:29.532 00000080 69 eb be 42 de 2e ba cd 12 e5 18 d9 30 02 71 59 i..B........0.qY 00:22:29.532 00000090 7b 42 e0 dc 85 51 62 95 5f 01 25 e1 3f b9 f5 5e {B...Qb._.%.?..^ 00:22:29.532 000000a0 45 c0 cb 95 34 22 ca 70 16 4d 4e 55 bd 40 66 dc E...4".p.MNU.@f. 00:22:29.532 000000b0 95 ac 55 b7 5d 72 13 ea 69 4f cc 3c 79 e3 db b2 ..U.]r..iO.....v\9p..Xb.. 00:22:29.533 00000060 a9 22 f5 db a7 2a 91 4e dd 31 74 5e 79 53 36 ed ."...*.N.1t^yS6. 00:22:29.533 00000070 33 ca f4 8b 95 d6 e4 86 2d 1e db a7 53 7e fc 5c 3.......-...S~.\ 00:22:29.533 00000080 a8 92 40 e5 28 06 e7 5a f2 7b dd 5f 5a 1e f0 56 ..@.(..Z.{._Z..V 00:22:29.533 00000090 42 55 c3 fd cc 9b 1f d4 62 b5 49 63 cf f4 4f ce BU......b.Ic..O. 00:22:29.533 000000a0 3a de 96 53 14 01 69 06 01 5a 13 2a 43 01 54 84 :..S..i..Z.*C.T. 00:22:29.533 000000b0 19 7f 6c 15 06 dd 69 4c 52 73 c0 0c 62 b2 99 0a ..l...iLRs..b... 00:22:29.533 000000c0 41 2d 13 e0 77 06 6e e3 33 19 27 33 ac 4a e5 ba A-..w.n.3.'3.J.. 
00:22:29.533 000000d0 ad 11 64 70 e7 5a 6d c2 61 f4 f1 ab c5 a2 4b 5f ..dp.Zm.a.....K_ 00:22:29.533 000000e0 1e fc 55 55 0d f7 2e 0f 6f e6 98 49 2a 6a 1f 60 ..UU....o..I*j.` 00:22:29.533 000000f0 d1 79 74 d1 a6 12 21 d7 58 c4 a4 b8 b8 aa e5 c1 .yt...!.X....... 00:22:29.533 dh secret: 00:22:29.533 00000000 4d 33 6c 19 4c a2 7b cd 17 a5 b9 de be 55 38 e8 M3l.L.{......U8. 00:22:29.533 00000010 0a 69 74 c6 c5 f6 1b c3 4a 3a af b9 b1 ed e2 9d .it.....J:...... 00:22:29.533 00000020 92 e3 e8 52 eb fd 85 b1 95 a5 f7 b3 43 e5 de bd ...R........C... 00:22:29.533 00000030 5f 4b 60 b2 d6 d7 14 28 0d 80 47 ca 7e 9a 32 87 _K`....(..G.~.2. 00:22:29.533 00000040 f3 c9 88 92 1f d8 4f f6 76 38 d5 90 69 62 b2 b3 ......O.v8..ib.. 00:22:29.533 00000050 5c cf 7b 26 51 a8 d7 ae a6 f6 1d 3f 66 1f 5f e5 \.{&Q......?f._. 00:22:29.533 00000060 30 b5 ef ca 4f 83 0b 40 58 f0 99 7e 3f de 32 8e 0...O..@X..~?.2. 00:22:29.533 00000070 74 2b 57 db 87 d8 cf e3 47 99 c4 07 6d f5 66 03 t+W.....G...m.f. 00:22:29.533 00000080 54 78 ce d8 a6 69 37 e0 55 7f 58 8f 10 a4 f5 70 Tx...i7.U.X....p 00:22:29.533 00000090 cf 39 7e fd 44 72 86 36 80 dd 1a ca 41 89 70 7c .9~.Dr.6....A.p| 00:22:29.533 000000a0 3f 32 90 72 06 00 9e 67 97 99 b6 f2 8d 1c 8b bd ?2.r...g........ 00:22:29.533 000000b0 3e da 8a de d2 b0 b2 8a 21 9e 07 52 62 00 bd 3f >.......!..Rb..? 00:22:29.533 000000c0 11 7e 11 f4 98 bc eb 9f 10 7b ee c9 46 c1 7a e3 .~.......{..F.z. 00:22:29.533 000000d0 9e fb 35 f8 34 86 8b dc ec c5 38 e7 3a 39 63 09 ..5.4.....8.:9c. 00:22:29.533 000000e0 06 2f e9 53 65 7c 5b 9a 32 e2 a7 94 37 c0 c5 7e ./.Se|[.2...7..~ 00:22:29.533 000000f0 11 b5 44 c5 0c 03 09 df ad 4d 0e d3 31 fb 32 a2 ..D......M..1.2. 00:22:29.533 [2024-09-27 13:27:21.768169] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key2, hash=3, dhgroup=1, seq=3775755277, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.533 [2024-09-27 13:27:21.768547] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.533 [2024-09-27 13:27:21.772982] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.533 [2024-09-27 13:27:21.773421] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.533 [2024-09-27 13:27:21.773704] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.533 [2024-09-27 13:27:21.774015] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.533 [2024-09-27 13:27:21.825824] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.533 [2024-09-27 13:27:21.826034] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.533 [2024-09-27 13:27:21.826221] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 1 (ffdhe2048) 00:22:29.533 [2024-09-27 13:27:21.826378] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.533 [2024-09-27 
13:27:21.826794] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.533 ctrlr pubkey: 00:22:29.533 00000000 f0 15 98 58 01 ea 3d 76 5c 7c 1f 1e f4 f3 bd 6d ...X..=v\|.....m 00:22:29.533 00000010 48 36 29 91 ea 94 e8 d7 6c 6d e6 0f 76 1a 1e 55 H6).....lm..v..U 00:22:29.533 00000020 cb 11 ad 92 50 38 bd 55 b6 89 7a cf 0c 34 21 72 ....P8.U..z..4!r 00:22:29.533 00000030 7a 31 de 55 fa f2 e2 d0 a7 fd 1f 73 4f de 47 10 z1.U.......sO.G. 00:22:29.533 00000040 9b b8 39 8e f3 0e b7 af 5d dd c9 21 9a 73 1b dd ..9.....]..!.s.. 00:22:29.533 00000050 9f 6a 58 63 10 8f f6 47 2d 93 e1 96 1f 32 56 fb .jXc...G-....2V. 00:22:29.533 00000060 c1 e8 e4 22 ef b8 26 37 2c 17 0c 2d bf 91 1d 69 ..."..&7,..-...i 00:22:29.533 00000070 6d 75 e1 3b f0 20 a7 7f b1 4d d0 37 48 8b 93 f4 mu.;. ...M.7H... 00:22:29.533 00000080 b5 31 96 43 2e 12 37 f1 bf 58 66 ca 49 cf 7e 0a .1.C..7..Xf.I.~. 00:22:29.533 00000090 cd 6d be 59 e6 84 2e 16 3b 56 7d 85 1d c0 42 90 .m.Y....;V}...B. 00:22:29.533 000000a0 31 60 6f 38 28 10 1e 6d aa 70 57 27 bd 98 35 a5 1`o8(..m.pW'..5. 00:22:29.533 000000b0 a1 87 77 0a 4b 1d 86 e0 1d ee b0 fc b4 ee 7c d4 ..w.K.........|. 00:22:29.533 000000c0 34 65 55 1b d5 43 05 3f 64 f2 74 24 44 69 7c 08 4eU..C.?d.t$Di|. 00:22:29.533 000000d0 e5 09 a0 e5 ba 1e e8 bd 9f 80 14 2b ad d6 d4 aa ...........+.... 00:22:29.533 000000e0 1d b6 a4 af af 35 b4 32 8a 97 f7 14 67 73 7a 16 .....5.2....gsz. 00:22:29.533 000000f0 ea 99 3b 40 d7 ee cd 85 46 be a3 a3 e3 ca f8 31 ..;@....F......1 00:22:29.533 host pubkey: 00:22:29.533 00000000 c7 cb 43 5a f8 b6 0e cb e6 6d aa 6b cb f8 e6 bd ..CZ.....m.k.... 00:22:29.533 00000010 3b bc de 90 e6 7c a4 72 ed 96 9a 17 c5 4c 60 0c ;....|.r.....L`. 00:22:29.533 00000020 17 d6 d9 75 f7 03 65 8c 45 2a 06 ff 90 69 1e 18 ...u..e.E*...i.. 00:22:29.533 00000030 24 20 9c 77 52 a8 7d 66 9a 68 92 f3 9e aa a5 e5 $ .wR.}f.h...... 00:22:29.533 00000040 ec db ed 4a 2e 1c c7 df d7 41 a5 97 84 1c 7e 53 ...J.....A....~S 00:22:29.533 00000050 c3 06 1d c7 57 4c c2 df 54 4b d1 28 c5 d1 aa cd ....WL..TK.(.... 00:22:29.533 00000060 b1 b7 c6 53 06 f1 e3 29 77 c2 4b 6b e8 0a 06 f5 ...S...)w.Kk.... 00:22:29.533 00000070 ab 0b 4c ff 5d d9 f6 1e 1b 07 dd fd 22 94 38 39 ..L.].......".89 00:22:29.533 00000080 81 0a 4c 1f 20 91 ba f9 6e 11 24 5e 63 42 5a a7 ..L. ...n.$^cBZ. 00:22:29.533 00000090 4d d5 50 8b eb 9f 28 9c ee d2 f5 46 70 99 ba 34 M.P...(....Fp..4 00:22:29.533 000000a0 fa bf b0 0c 5e fe 6b 0a 8a b7 c5 fe b1 2e 91 a3 ....^.k......... 00:22:29.533 000000b0 0d fe f5 64 bb 9b dd 39 13 b4 4b eb 7b a7 7c 6a ...d...9..K.{.|j 00:22:29.533 000000c0 74 ae 3b 9d 22 b3 85 74 44 9e fb 33 dd 51 ed 7e t.;."..tD..3.Q.~ 00:22:29.533 000000d0 39 4d 8f 1f f1 4d d4 1c ea 15 ea c2 bf 14 09 9d 9M...M.......... 00:22:29.533 000000e0 3c 5d 27 ed 52 4c 2c 77 97 c1 13 8b 56 93 30 2a <]'.RL,w....V.0* 00:22:29.533 000000f0 49 9c fe 43 c2 1b 56 0f e6 bc 2e 5d 33 af c5 38 I..C..V....]3..8 00:22:29.533 dh secret: 00:22:29.533 00000000 f7 90 ef 23 77 bd ee 5e a9 f4 34 40 d3 c5 8f e3 ...#w..^..4@.... 00:22:29.533 00000010 56 a9 ae 4f 36 57 b1 88 7b 6a c6 27 fb 6b 38 97 V..O6W..{j.'.k8. 00:22:29.533 00000020 1a 55 34 9a 3e 8c 86 b9 3c 58 f8 42 3e 96 e4 c5 .U4.>...... 00:22:29.533 00000030 73 db 13 0f 3a be 10 fc 10 aa d4 bb 91 f9 3a 4f s...:.........:O 00:22:29.533 00000040 24 84 71 59 f7 4a c1 6f 03 44 aa 09 a3 3c b1 32 $.qY.J.o.D...<.2 00:22:29.533 00000050 b8 94 d3 b5 fb 49 d8 41 cb df be 0a 5e 36 55 98 .....I.A....^6U. 
00:22:29.533 00000060 55 73 d2 8a 19 9a 44 97 53 d0 d5 f2 18 9f 25 05 Us....D.S.....%. 00:22:29.533 00000070 80 df 26 ff c4 4b 77 26 2f 08 63 81 4f f7 86 73 ..&..Kw&/.c.O..s 00:22:29.533 00000080 4c e9 fa cb 8f 97 e7 53 95 de 43 7f cb 95 00 2c L......S..C...., 00:22:29.533 00000090 64 22 4c a4 f9 3c ab a8 50 06 c2 95 3c ff d2 05 d"L..<..P...<... 00:22:29.533 000000a0 71 6f c2 1f 45 39 93 c2 b3 96 be d6 27 f7 4e 46 qo..E9......'.NF 00:22:29.533 000000b0 d2 a8 a3 8f 53 ee 03 4d b5 e5 48 ba 7b 1a b3 07 ....S..M..H.{... 00:22:29.533 000000c0 5b e8 1f a2 b4 11 87 47 a7 e7 f1 0f dd c9 76 66 [......G......vf 00:22:29.533 000000d0 df bb 0f 64 ca 1c d8 13 ae e6 d4 f4 a4 fa 45 60 ...d..........E` 00:22:29.533 000000e0 3a 75 6f 24 19 dc 29 77 29 c6 4d 6a 5e 1c 34 43 :uo$..)w).Mj^.4C 00:22:29.533 000000f0 f3 e7 4c a0 22 e0 1c 3b 9f 17 d9 5d 07 97 0f fc ..L."..;...].... 00:22:29.533 [2024-09-27 13:27:21.833554] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key2, hash=3, dhgroup=1, seq=3775755278, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.533 [2024-09-27 13:27:21.833937] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.533 [2024-09-27 13:27:21.837887] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.533 [2024-09-27 13:27:21.838270] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.533 [2024-09-27 13:27:21.838507] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.533 [2024-09-27 13:27:21.838801] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.533 [2024-09-27 13:27:21.937374] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.533 [2024-09-27 13:27:21.937665] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.533 [2024-09-27 13:27:21.937878] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 1 (ffdhe2048) 00:22:29.533 [2024-09-27 13:27:21.938038] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.533 [2024-09-27 13:27:21.938254] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.533 ctrlr pubkey: 00:22:29.533 00000000 63 a1 d8 bb 89 3f cb 88 20 6b 4b 00 f6 e9 89 2d c....?.. kK....- 00:22:29.533 00000010 39 ba b8 a6 f0 26 b1 df 74 a2 ab ac 15 ee e0 8e 9....&..t....... 00:22:29.533 00000020 01 4b 68 d1 42 00 d5 a6 2b 4f ee 31 10 16 ba 4a .Kh.B...+O.1...J 00:22:29.533 00000030 64 8c 09 20 14 d9 5c ce a6 9b 4d 7b 95 12 6b 74 d.. ..\...M{..kt 00:22:29.533 00000040 84 aa 88 83 9a 6f ca b5 bf c5 ef 34 6a 28 ac be .....o.....4j(.. 00:22:29.533 00000050 56 7a 6e c5 4f 79 c4 2d d0 22 0f 3a d3 66 30 58 Vzn.Oy.-.".:.f0X 00:22:29.533 00000060 d7 a9 4b 49 db 04 b0 86 32 fa d4 58 f0 4f 9a 1e ..KI....2..X.O.. 
00:22:29.533 00000070 62 3c e5 39 26 e4 9b a0 9e cc 9c ab 6a a7 73 94 b<.9&.......j.s. 00:22:29.533 00000080 52 48 6a c0 4d 13 cb eb e9 f2 c1 38 ec 53 84 9b RHj.M......8.S.. 00:22:29.533 00000090 32 ee 9e 60 01 30 4e 9e a8 ed e3 34 cb 5f 6e 01 2..`.0N....4._n. 00:22:29.533 000000a0 5b 12 f1 cc 49 09 ca fd e2 b5 cc dd 03 8e b4 90 [...I........... 00:22:29.533 000000b0 75 c9 77 7a 5d 2e d9 a2 52 5a 8e 6f ef 0d 77 fb u.wz]...RZ.o..w. 00:22:29.533 000000c0 fb 21 3e a9 53 42 62 d7 10 fe 23 70 a0 39 ea 38 .!>.SBb...#p.9.8 00:22:29.533 000000d0 7e 19 98 dc 6b d6 ce 45 dd cf 73 6b 49 00 7c 20 ~...k..E..skI.| 00:22:29.533 000000e0 69 be 4d d4 a5 9b 9e da 2d d9 0d e2 f9 72 b7 db i.M.....-....r.. 00:22:29.533 000000f0 a0 b0 94 91 d1 55 6e d3 40 a2 a4 3d d4 24 70 a6 .....Un.@..=.$p. 00:22:29.533 host pubkey: 00:22:29.533 00000000 e1 f2 0a 42 14 7f 90 f4 52 f6 d2 f8 66 f7 8b 63 ...B....R...f..c 00:22:29.533 00000010 07 98 25 6b d0 86 3c 06 e5 68 81 c0 74 9f d4 fe ..%k..<..h..t... 00:22:29.533 00000020 02 5c 45 e6 c3 87 88 03 fd 77 aa 43 c2 91 c6 71 .\E......w.C...q 00:22:29.533 00000030 1a 65 bd 5a b9 ab 94 ee 3a 1e 65 d7 be 57 f5 2b .e.Z....:.e..W.+ 00:22:29.533 00000040 38 02 ab 67 e7 eb 53 7a b5 b3 db 85 84 34 58 11 8..g..Sz.....4X. 00:22:29.533 00000050 35 53 2c db 6b e1 4e 73 43 8b 12 50 32 15 96 03 5S,.k.NsC..P2... 00:22:29.533 00000060 4e 78 97 82 0c 4c 73 ad 5b 0b 50 b0 22 fc 56 35 Nx...Ls.[.P.".V5 00:22:29.533 00000070 31 4e 8b f6 40 78 f7 03 f2 5e 17 08 20 36 47 03 1N..@x...^.. 6G. 00:22:29.533 00000080 27 7c 44 8b 1f df 43 f5 23 1a 7a 98 07 7f e2 58 '|D...C.#.z....X 00:22:29.533 00000090 2e 0b ba d4 85 3e 05 21 f8 9d f6 81 ce 39 0e 63 .....>.!.....9.c 00:22:29.533 000000a0 cd 17 2b 09 a5 94 18 05 8a dc 7f a6 d8 54 19 47 ..+..........T.G 00:22:29.534 000000b0 35 24 c5 0f 5f 20 f4 c2 2c df a5 1a cd c7 aa 9b 5$.._ ..,....... 00:22:29.534 000000c0 26 59 14 66 3f fb 4b b8 69 0c 8a 80 bf 4a 64 6b &Y.f?.K.i....Jdk 00:22:29.534 000000d0 bb 07 38 6e d9 28 c0 3f 84 aa a5 e8 81 ba 96 5d ..8n.(.?.......] 00:22:29.534 000000e0 b7 31 8a 11 af 5f 81 9a 89 44 26 34 fb 4c b9 fb .1..._...D&4.L.. 00:22:29.534 000000f0 0b 8a 0e e9 48 01 72 2f e6 98 a6 73 6a 44 72 f8 ....H.r/...sjDr. 00:22:29.534 dh secret: 00:22:29.534 00000000 f5 b1 49 ba 14 24 33 dc 09 5f 43 b7 c4 6e d3 d9 ..I..$3.._C..n.. 00:22:29.534 00000010 ef 0d 31 ea 3c 59 e4 95 62 ff 1d 87 31 67 b5 7d ..1.....9.O:x.X.? 00:22:29.534 000000d0 79 3c be 90 6c 32 1b 96 f1 8b b7 36 57 39 0e 32 y<..l2.....6W9.2 00:22:29.534 000000e0 f1 4d d4 5f 9d 9f f5 2a 9a dc b2 64 ce ce 54 ad .M._...*...d..T. 00:22:29.534 000000f0 34 ff 72 bc 05 34 6f 87 ee 42 07 e4 00 5b d8 ea 4.r..4o..B...[.. 
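The ctrlr pubkey, host pubkey and dh secret buffers in this trace are printed one row at a time as an offset, sixteen hex bytes and an ASCII column. A small stand-alone C sketch that reproduces that layout (with a dummy pattern in place of the real key material) could look roughly like this; dump_buffer is a hypothetical name, not an SPDK function.

/*
 * Hypothetical re-creation of the "offset / 16 hex bytes / ASCII" dump
 * layout used for the pubkey and dh secret buffers above.
 */
#include <ctype.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static void
dump_buffer(const uint8_t *buf, size_t len)
{
	size_t i, j;

	for (i = 0; i < len; i += 16) {
		printf("%08zx ", i);
		for (j = 0; j < 16; j++) {
			if (i + j < len) {
				printf("%02x ", buf[i + j]);
			} else {
				printf("   ");
			}
		}
		for (j = 0; j < 16 && i + j < len; j++) {
			/* Non-printable bytes become '.', hence the mostly-dotted ASCII column. */
			putchar(isprint(buf[i + j]) ? buf[i + j] : '.');
		}
		putchar('\n');
	}
}

int
main(void)
{
	uint8_t secret[256];
	size_t i;

	/* Dummy fill; the buffers in the log are DH key material. */
	for (i = 0; i < sizeof(secret); i++) {
		secret[i] = (uint8_t)(i * 37u);
	}
	dump_buffer(secret, sizeof(secret));
	return 0;
}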
00:22:29.534 [2024-09-27 13:27:21.944895] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key3, hash=3, dhgroup=1, seq=3775755279, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.534 [2024-09-27 13:27:21.945245] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.534 [2024-09-27 13:27:21.949163] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.534 [2024-09-27 13:27:21.949481] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.534 [2024-09-27 13:27:21.949697] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.534 [2024-09-27 13:27:21.949884] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.534 [2024-09-27 13:27:22.001859] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.534 [2024-09-27 13:27:22.002125] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.534 [2024-09-27 13:27:22.002274] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 1 (ffdhe2048) 00:22:29.534 [2024-09-27 13:27:22.002437] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.534 [2024-09-27 13:27:22.002708] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.534 ctrlr pubkey: 00:22:29.534 00000000 63 a1 d8 bb 89 3f cb 88 20 6b 4b 00 f6 e9 89 2d c....?.. kK....- 00:22:29.534 00000010 39 ba b8 a6 f0 26 b1 df 74 a2 ab ac 15 ee e0 8e 9....&..t....... 00:22:29.534 00000020 01 4b 68 d1 42 00 d5 a6 2b 4f ee 31 10 16 ba 4a .Kh.B...+O.1...J 00:22:29.534 00000030 64 8c 09 20 14 d9 5c ce a6 9b 4d 7b 95 12 6b 74 d.. ..\...M{..kt 00:22:29.534 00000040 84 aa 88 83 9a 6f ca b5 bf c5 ef 34 6a 28 ac be .....o.....4j(.. 00:22:29.534 00000050 56 7a 6e c5 4f 79 c4 2d d0 22 0f 3a d3 66 30 58 Vzn.Oy.-.".:.f0X 00:22:29.534 00000060 d7 a9 4b 49 db 04 b0 86 32 fa d4 58 f0 4f 9a 1e ..KI....2..X.O.. 00:22:29.534 00000070 62 3c e5 39 26 e4 9b a0 9e cc 9c ab 6a a7 73 94 b<.9&.......j.s. 00:22:29.534 00000080 52 48 6a c0 4d 13 cb eb e9 f2 c1 38 ec 53 84 9b RHj.M......8.S.. 00:22:29.534 00000090 32 ee 9e 60 01 30 4e 9e a8 ed e3 34 cb 5f 6e 01 2..`.0N....4._n. 00:22:29.534 000000a0 5b 12 f1 cc 49 09 ca fd e2 b5 cc dd 03 8e b4 90 [...I........... 00:22:29.534 000000b0 75 c9 77 7a 5d 2e d9 a2 52 5a 8e 6f ef 0d 77 fb u.wz]...RZ.o..w. 00:22:29.534 000000c0 fb 21 3e a9 53 42 62 d7 10 fe 23 70 a0 39 ea 38 .!>.SBb...#p.9.8 00:22:29.534 000000d0 7e 19 98 dc 6b d6 ce 45 dd cf 73 6b 49 00 7c 20 ~...k..E..skI.| 00:22:29.534 000000e0 69 be 4d d4 a5 9b 9e da 2d d9 0d e2 f9 72 b7 db i.M.....-....r.. 00:22:29.534 000000f0 a0 b0 94 91 d1 55 6e d3 40 a2 a4 3d d4 24 70 a6 .....Un.@..=.$p. 00:22:29.534 host pubkey: 00:22:29.534 00000000 1d cc 26 d8 92 a8 0b c8 e2 58 20 c2 4a b7 55 fb ..&......X .J.U. 
00:22:29.534 00000010 bd 2f 8e 5d a2 26 21 ac b6 cc 2d be 09 36 b8 ce ./.].&!...-..6.. 00:22:29.534 00000020 08 5a ae 3f 93 b2 2d ce ee 75 5d 94 bd 1f de ab .Z.?..-..u]..... 00:22:29.534 00000030 39 ff b1 7d 9e 01 62 a7 e0 6b 70 1d 55 c5 b8 88 9..}..b..kp.U... 00:22:29.534 00000040 7a 40 ff e7 c3 a9 1f 1e a6 06 0f a7 05 da 05 28 z@.............( 00:22:29.534 00000050 06 99 de 8e b3 df e1 69 0a f7 87 09 23 36 60 cb .......i....#6`. 00:22:29.534 00000060 86 10 b8 80 06 ee 04 82 67 69 a8 ed 2e a3 e4 18 ........gi...... 00:22:29.534 00000070 d0 79 19 4f 4a 2b e2 74 9f 7b 8a 18 42 29 7d 91 .y.OJ+.t.{..B)}. 00:22:29.534 00000080 de bb c2 5a 47 1c d5 58 a9 db 24 14 90 e4 ef c9 ...ZG..X..$..... 00:22:29.534 00000090 ff b2 90 2c 2e 9c fe 77 ac 70 f6 82 46 f6 b0 d8 ...,...w.p..F... 00:22:29.534 000000a0 de 46 1c da 87 75 63 b1 e4 66 a4 99 af ab 8c fd .F...uc..f...... 00:22:29.534 000000b0 eb 1e c6 48 cd e9 c0 53 2a 2b 54 74 e2 1d 1b 34 ...H...S*+Tt...4 00:22:29.534 000000c0 67 27 b4 b4 be 43 ea 9d 56 6c 89 d7 7e 18 4f 33 g'...C..Vl..~.O3 00:22:29.534 000000d0 0b 02 ee a0 bc 11 ff e8 a0 4c 7b f6 30 44 f2 64 .........L{.0D.d 00:22:29.534 000000e0 b7 81 59 33 8a ba 6c 16 d4 e5 73 d4 99 ea ad b8 ..Y3..l...s..... 00:22:29.534 000000f0 7f 53 58 67 95 bc f2 f5 cb ac a7 1d da e0 7e 8d .SXg..........~. 00:22:29.534 dh secret: 00:22:29.534 00000000 1f 5d 03 09 6b 90 15 17 3e a9 16 ec c1 5f af aa .]..k...>...._.. 00:22:29.534 00000010 65 70 75 fa e8 b3 24 17 35 0a 41 aa ed 5d 4c 78 epu...$.5.A..]Lx 00:22:29.534 00000020 6a b7 9f 71 00 5d 8e 55 6a 6b e3 57 ef 02 f7 55 j..q.].Ujk.W...U 00:22:29.534 00000030 9b cc 47 2a fb 24 c7 89 d6 e1 ca bb de c6 fe d6 ..G*.$.......... 00:22:29.534 00000040 42 58 f5 ef 8b f4 90 d9 7d 6d 6c 51 3c 9d 66 29 BX......}mlQ<.f) 00:22:29.534 00000050 84 c8 d2 08 5b 37 d9 18 e2 aa 91 a3 4b 6c d0 c4 ....[7......Kl.. 00:22:29.534 00000060 3c ea f1 b7 e5 b3 3d b1 05 14 84 29 05 42 ae 92 <.....=....).B.. 00:22:29.534 00000070 2b 3e da f2 71 ff 8b 00 d6 60 e4 46 67 18 1d b7 +>..q....`.Fg... 00:22:29.534 00000080 1d 82 cd 21 8b 38 9a ad 6d 8a f2 a3 cf d1 5f 1b ...!.8..m....._. 00:22:29.534 00000090 65 54 19 eb f7 0d f7 8d cf 35 59 e0 c5 d2 2c 27 eT.......5Y...,' 00:22:29.534 000000a0 0d f4 57 1b c9 d0 64 f9 61 1f f1 b1 0c 6e 27 e9 ..W...d.a....n'. 00:22:29.534 000000b0 62 3b f1 c3 ca 67 c7 48 a6 13 6c ce 0f 91 0a ec b;...g.H..l..... 00:22:29.534 000000c0 21 f1 dc 8c 02 a8 27 06 6d b2 89 e2 b4 94 e8 c8 !.....'.m....... 00:22:29.534 000000d0 88 6a bb 7e 90 f9 5f 18 25 de 6f 70 dd ae 5e 1b .j.~.._.%.op..^. 00:22:29.534 000000e0 f9 a9 91 ee ca 28 1d b2 c9 a8 f8 9b e2 ca d9 14 .....(.......... 
00:22:29.534 000000f0 fc 5e 49 42 81 1d 42 e8 c0 96 8d 9b e7 d7 7c 75 .^IB..B.......|u 00:22:29.534 [2024-09-27 13:27:22.009700] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key3, hash=3, dhgroup=1, seq=3775755280, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.534 [2024-09-27 13:27:22.009992] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.534 [2024-09-27 13:27:22.014151] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.534 [2024-09-27 13:27:22.014593] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.534 [2024-09-27 13:27:22.014751] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.534 [2024-09-27 13:27:22.015062] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.534 [2024-09-27 13:27:22.109803] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.534 [2024-09-27 13:27:22.110087] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.534 [2024-09-27 13:27:22.110393] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 1 (ffdhe2048) 00:22:29.534 [2024-09-27 13:27:22.110504] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.534 [2024-09-27 13:27:22.110871] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.534 ctrlr pubkey: 00:22:29.534 00000000 8d da 71 08 3f 87 5e 5a 32 b9 ce 93 74 87 23 1a ..q.?.^Z2...t.#. 00:22:29.534 00000010 b2 d3 17 98 05 1a 58 f2 2e e1 c1 21 49 13 40 b2 ......X....!I.@. 00:22:29.534 00000020 38 70 a9 a6 2e fa 1b b3 20 cd ed 53 f9 86 30 76 8p...... ..S..0v 00:22:29.534 00000030 5b 56 a8 32 25 17 40 f8 7e aa 23 bd 0e 2c d1 fa [V.2%.@.~.#..,.. 00:22:29.534 00000040 46 db 72 fa 86 45 fa ee 8b c2 0e b5 15 d3 a2 c9 F.r..E.......... 00:22:29.534 00000050 51 ce 8c 4d 93 62 ee ac 19 21 1e 6e 97 a5 09 af Q..M.b...!.n.... 00:22:29.534 00000060 b9 28 ab 34 34 48 62 97 b8 5d ea 57 69 5f fa 67 .(.44Hb..].Wi_.g 00:22:29.534 00000070 02 1c 71 91 3e 1b 0a c2 de 12 08 ac b0 c0 9a e4 ..q.>........... 00:22:29.534 00000080 21 52 ad 4c f4 a3 1b c2 57 15 ce 2d 8b 75 fd 0f !R.L....W..-.u.. 00:22:29.534 00000090 b1 ff 8b c5 a6 91 d7 ca c1 fa 38 e9 24 2f 65 7c ..........8.$/e| 00:22:29.534 000000a0 73 c2 7b ac d5 0c 0e 69 37 22 61 02 39 a4 5e 1f s.{....i7"a.9.^. 00:22:29.534 000000b0 a5 4d a6 6a a6 e9 0b 3b 12 3c 49 9d d9 22 d5 3a .M.j...;..e. 00:22:29.534 00000030 83 23 8d 2f ec b6 86 07 92 f4 2b c9 44 1d 06 02 .#./......+.D... 00:22:29.534 00000040 2e e9 92 81 cb 49 e2 fe 8b 3d 0b 60 35 a1 41 ec .....I...=.`5.A. 00:22:29.534 00000050 6e 4c 9b 55 96 b8 e6 0a 18 10 68 85 a1 c1 24 ef nL.U......h...$. 00:22:29.534 00000060 27 1c d7 91 ab 40 40 7f a8 e2 74 b0 4b fc 34 f6 '....@@...t.K.4. 
00:22:29.534 00000070 18 ba 46 8d a2 81 7e 38 78 4c 46 b5 06 41 78 65 ..F...~8xLF..Axe 00:22:29.534 00000080 53 e0 36 68 a9 26 d1 bf c4 93 a9 8b 08 e4 2b 9b S.6h.&........+. 00:22:29.534 00000090 03 75 9f 35 e4 26 af f6 5e de 17 c5 7d 7f cd 00 .u.5.&..^...}... 00:22:29.534 000000a0 55 e1 f0 30 85 7f 7b 5c 40 63 2b 0f 45 bd 8a bf U..0..{\@c+.E... 00:22:29.534 000000b0 91 26 75 4c 4e ce ed 95 af e2 cc 2c 32 f8 47 69 .&uLN......,2.Gi 00:22:29.534 000000c0 9b 87 13 78 53 ad 3a cd d0 19 ab 49 2a 93 20 1a ...xS.:....I*. . 00:22:29.534 000000d0 ab 0b 93 5d 3b bd 6a d0 87 3e 5c 14 97 fd b8 82 ...];.j..>\..... 00:22:29.534 000000e0 8e 73 17 4b 76 04 9e 8a 1e a3 53 c6 4d 86 e7 ed .s.Kv.....S.M... 00:22:29.534 000000f0 a0 8b 7c 74 86 3f 64 c1 f8 62 03 c0 bf 46 f5 15 ..|t.?d..b...F.. 00:22:29.534 dh secret: 00:22:29.534 00000000 dd d1 0c c3 df 30 46 df 48 56 f4 dd 69 c6 fc 4b .....0F.HV..i..K 00:22:29.534 00000010 d3 8c 1f 0c 95 88 07 3c bc 41 5a 57 85 cc a1 51 .......<.AZW...Q 00:22:29.534 00000020 78 1f fa 05 f0 10 5d 98 bd d0 ff 8b c1 38 0d 29 x.....]......8.) 00:22:29.534 00000030 1c eb 29 6d aa 0e 07 e7 7c 77 01 fc c8 db 4a 58 ..)m....|w....JX 00:22:29.534 00000040 29 a4 53 31 8c 60 d9 6a e8 1e d8 86 74 7e a5 a2 ).S1.`.j....t~.. 00:22:29.534 00000050 03 46 95 11 8c e1 5b f5 e2 79 7f 0b b0 37 5b 8d .F....[..y...7[. 00:22:29.534 00000060 9c 6e db c9 ef 57 3d 4a 70 14 d3 e1 71 2c c4 91 .n...W=Jp...q,.. 00:22:29.534 00000070 47 c7 64 82 7c 9e 4b 92 58 bb 6e 67 28 4e 5f aa G.d.|.K.X.ng(N_. 00:22:29.534 00000080 ad a5 df 10 e3 0d 34 18 a8 d0 b3 36 f9 cb 3a 9e ......4....6..:. 00:22:29.535 00000090 2b d8 ae 34 23 51 50 2e ea fe e9 7a 96 8b 47 3d +..4#QP....z..G= 00:22:29.535 000000a0 2d 43 da 1e 1c 60 50 93 b2 2d c8 94 ed c5 f0 d5 -C...`P..-...... 00:22:29.535 000000b0 81 61 31 6c b7 a2 19 97 26 5a ea 44 04 d8 90 bf .a1l....&Z.D.... 00:22:29.535 000000c0 34 1b 89 90 54 12 c2 e6 57 20 12 53 fd bb be 2b 4...T...W .S...+ 00:22:29.535 000000d0 84 32 44 74 7b 99 fa 57 88 00 98 fd 4d e4 24 1d .2Dt{..W....M.$. 00:22:29.535 000000e0 84 b9 3e 16 89 8c f7 a1 1b f1 79 a2 0b 68 dd 01 ..>.......y..h.. 00:22:29.535 000000f0 97 97 2b 68 7d 4c 0d b1 3c f2 84 96 dd 7c 14 ed ..+h}L..<....|.. 
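The debug records above repeat the same host-side DH-HMAC-CHAP round for each key/hash/dhgroup combination: negotiate, await-negotiate, await-challenge, await-reply, await-success1 (with an await-success2 step in some rounds), then done, dumping the controller public value, host public value, and derived DH secret along the way. A minimal sketch for summarizing a saved copy of this console output; the filename nvmf-auth.log is a hypothetical placeholder:

    #!/usr/bin/env bash
    # Minimal sketch: summarize the DH-HMAC-CHAP debug records in a saved copy
    # of this console output. "nvmf-auth.log" is a hypothetical local filename.
    log=nvmf-auth.log

    # One "authentication completed successfully" line is printed per round.
    grep -c 'authentication completed successfully' "$log"

    # Break the reply records down by key/hash/dhgroup combination.
    grep -o 'key=key[0-9], hash=[0-9], dhgroup=[0-9]' "$log" | sort | uniq -c

    # Distinct auth states the host-side state machine passed through.
    grep -o 'auth state: [a-z0-9-]*' "$log" | sort -u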
00:22:29.535 [2024-09-27 13:27:22.117657] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key4, hash=3, dhgroup=1, seq=3775755281, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.535 [2024-09-27 13:27:22.117940] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.535 [2024-09-27 13:27:22.121944] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.535 [2024-09-27 13:27:22.122295] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.535 [2024-09-27 13:27:22.122555] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.535 [2024-09-27 13:27:22.174268] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.535 [2024-09-27 13:27:22.174559] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.535 [2024-09-27 13:27:22.174756] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 1 (ffdhe2048) 00:22:29.535 [2024-09-27 13:27:22.174882] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.535 [2024-09-27 13:27:22.175149] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.535 ctrlr pubkey: 00:22:29.535 00000000 8d da 71 08 3f 87 5e 5a 32 b9 ce 93 74 87 23 1a ..q.?.^Z2...t.#. 00:22:29.535 00000010 b2 d3 17 98 05 1a 58 f2 2e e1 c1 21 49 13 40 b2 ......X....!I.@. 00:22:29.535 00000020 38 70 a9 a6 2e fa 1b b3 20 cd ed 53 f9 86 30 76 8p...... ..S..0v 00:22:29.535 00000030 5b 56 a8 32 25 17 40 f8 7e aa 23 bd 0e 2c d1 fa [V.2%.@.~.#..,.. 00:22:29.535 00000040 46 db 72 fa 86 45 fa ee 8b c2 0e b5 15 d3 a2 c9 F.r..E.......... 00:22:29.535 00000050 51 ce 8c 4d 93 62 ee ac 19 21 1e 6e 97 a5 09 af Q..M.b...!.n.... 00:22:29.535 00000060 b9 28 ab 34 34 48 62 97 b8 5d ea 57 69 5f fa 67 .(.44Hb..].Wi_.g 00:22:29.535 00000070 02 1c 71 91 3e 1b 0a c2 de 12 08 ac b0 c0 9a e4 ..q.>........... 00:22:29.535 00000080 21 52 ad 4c f4 a3 1b c2 57 15 ce 2d 8b 75 fd 0f !R.L....W..-.u.. 00:22:29.535 00000090 b1 ff 8b c5 a6 91 d7 ca c1 fa 38 e9 24 2f 65 7c ..........8.$/e| 00:22:29.535 000000a0 73 c2 7b ac d5 0c 0e 69 37 22 61 02 39 a4 5e 1f s.{....i7"a.9.^. 00:22:29.535 000000b0 a5 4d a6 6a a6 e9 0b 3b 12 3c 49 9d d9 22 d5 3a .M.j...;.aa..x..T.I...~. 00:22:29.535 00000040 56 a8 79 ae 11 2e f8 98 c0 36 63 4c 64 67 42 a5 V.y......6cLdgB. 00:22:29.535 00000050 d0 bd 51 15 d0 c9 bf 19 22 7a 58 9c 14 53 6d bb ..Q....."zX..Sm. 00:22:29.535 00000060 b0 42 09 3c 20 8d b9 aa 0f 4c 9f aa 37 68 48 5b .B.< ....L..7hH[ 00:22:29.535 00000070 be 05 9d 42 f9 67 8d f9 b6 26 52 74 01 b5 55 73 ...B.g...&Rt..Us 00:22:29.535 00000080 ab f0 67 2e 0c 25 d9 2a 11 eb c4 5c 20 ff 84 46 ..g..%.*...\ ..F 00:22:29.535 00000090 a0 a2 e2 06 f3 da 45 d4 36 ac 2c 90 ec 26 21 7a ......E.6.,..&!z 00:22:29.535 000000a0 20 91 f3 40 96 ad e7 dc 53 4f 71 88 07 7d 99 d1 ..@....SOq..}.. 
00:22:29.535 000000b0 8f 15 60 e7 34 e9 57 88 ac 8a a1 02 fa 43 ed b0 ..`.4.W......C.. 00:22:29.535 000000c0 2e 57 2d 1e 6b ea 6c eb 7d f9 b2 33 08 e7 ce 8e .W-.k.l.}..3.... 00:22:29.535 000000d0 4e 10 a7 5b c9 c0 53 1e 40 9a 7a ac fe c1 45 5c N..[..S.@.z...E\ 00:22:29.535 000000e0 23 d1 3c bd 86 f5 26 4b 36 b6 85 55 ed c3 87 a6 #.<...&K6..U.... 00:22:29.535 000000f0 a9 4f 68 d3 db 2f de 04 be 60 d9 05 73 05 ee 46 .Oh../...`..s..F 00:22:29.535 dh secret: 00:22:29.535 00000000 42 d4 a8 80 8f 3c 90 48 86 be ba ed 43 de e7 9c B....<.H....C... 00:22:29.535 00000010 b9 3b 63 82 27 24 f4 2c d0 20 1f 77 36 68 86 24 .;c.'$.,. .w6h.$ 00:22:29.535 00000020 2d 5d 84 b3 30 e6 0b 69 66 74 12 1b 19 b1 79 9e -]..0..ift....y. 00:22:29.535 00000030 0e 3f 49 f1 68 8f f7 73 80 1a c8 11 25 62 c8 5f .?I.h..s....%b._ 00:22:29.535 00000040 e8 f5 bf 84 d2 67 5e bf 12 0b 53 50 6a 11 79 cf .....g^...SPj.y. 00:22:29.535 00000050 d4 a4 9a 6d e2 24 7b a4 51 6a 0c 94 aa 9d 22 89 ...m.${.Qj....". 00:22:29.535 00000060 fb 4f 0b 87 19 4b 65 6f 68 3e bf 02 91 96 60 1d .O...Keoh>....`. 00:22:29.535 00000070 4b 10 d3 dc fe 28 f5 83 5b a4 7e 7e 92 33 4d 02 K....(..[.~~.3M. 00:22:29.535 00000080 a2 2b 74 24 bf 8b e3 c1 6b 19 eb e5 73 bb 79 aa .+t$....k...s.y. 00:22:29.535 00000090 23 4d 3d fe 10 9b 65 50 f7 ba 4e bb 82 4e 23 0f #M=...eP..N..N#. 00:22:29.535 000000a0 a7 28 55 44 ca 58 4a a3 7f 6d 33 cf 5c 0f f8 3d .(UD.XJ..m3.\..= 00:22:29.535 000000b0 23 05 a7 35 e4 b3 38 f5 f3 fd 2d 5d 1d 99 6a 83 #..5..8...-]..j. 00:22:29.535 000000c0 6c c5 7d ef be 05 e9 11 eb 90 49 29 49 aa d5 3f l.}.......I)I..? 00:22:29.535 000000d0 b6 ba a7 02 1a fd 80 24 80 f7 99 5f e2 85 14 c9 .......$..._.... 00:22:29.535 000000e0 18 03 d9 59 b5 30 c6 77 4a ea cf 21 e9 d2 18 d4 ...Y.0.wJ..!.... 00:22:29.535 000000f0 40 d5 20 37 75 65 62 db 42 97 fc 6e 5c 01 44 a2 @. 7ueb.B..n\.D. 
00:22:29.535 [2024-09-27 13:27:22.182531] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key4, hash=3, dhgroup=1, seq=3775755282, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.535 [2024-09-27 13:27:22.182888] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.535 [2024-09-27 13:27:22.186910] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.535 [2024-09-27 13:27:22.187242] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.535 [2024-09-27 13:27:22.187396] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.535 [2024-09-27 13:27:22.284876] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.535 [2024-09-27 13:27:22.285098] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.535 [2024-09-27 13:27:22.285291] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 2 (ffdhe3072) 00:22:29.535 [2024-09-27 13:27:22.285509] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.535 [2024-09-27 13:27:22.285754] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.535 ctrlr pubkey: 00:22:29.535 00000000 63 a0 09 97 fd 9f bf 66 50 d2 e9 12 0b 1f 74 dd c......fP.....t. 00:22:29.535 00000010 25 cc 13 b3 35 a5 73 ae a0 58 e8 c5 79 87 fc 13 %...5.s..X..y... 00:22:29.535 00000020 e7 31 2d b2 3c e2 1e 49 10 3c 78 b7 1c 1e ac 63 .1-.<..I.. 00:22:29.535 00000040 4d b1 a1 8e b1 6b 0e 2a 48 7a 78 1f 50 74 1c d2 M....k.*Hzx.Pt.. 00:22:29.535 00000050 f9 16 cb 35 23 aa 53 24 d5 4d af 9b 9a 20 35 ed ...5#.S$.M... 5. 00:22:29.535 00000060 69 39 e9 06 b3 4e 15 a2 0d 6f 9a 1f 26 9d e4 20 i9...N...o..&.. 00:22:29.535 00000070 12 86 63 40 9d 2e aa 9c 41 58 fd 31 e6 59 24 97 ..c@....AX.1.Y$. 00:22:29.535 00000080 3a eb 54 88 b9 ce b3 6e 4b 24 53 62 a5 31 ec 09 :.T....nK$Sb.1.. 00:22:29.535 00000090 5b 66 7b 29 e1 67 63 67 50 b9 8a 14 dc fa af 2a [f{).gcgP......* 00:22:29.535 000000a0 04 b1 79 ab fe f1 9b 58 2f 48 c6 ba da 2f 7d f7 ..y....X/H.../}. 00:22:29.535 000000b0 8a 1a f0 ca 06 13 f5 8f 27 22 b3 8c 86 ee 91 a4 ........'"...... 00:22:29.535 000000c0 7b 69 6b c3 93 a2 45 12 0a 6c b2 c7 4f d1 a9 3d {ik...E..l..O..= 00:22:29.535 000000d0 7e 4d 9d fb c0 77 9e 48 c3 b0 d4 fb 25 c0 1d cb ~M...w.H....%... 00:22:29.535 000000e0 c2 6e 8a 12 31 a8 e6 61 6a c6 e6 62 59 b6 8c 8d .n..1..aj..bY... 00:22:29.535 000000f0 93 10 f7 ed 68 40 f0 1d c3 c4 7f 63 15 08 be 42 ....h@.....c...B 00:22:29.535 00000100 24 a8 78 fc 15 37 ce 9f bf ac 66 65 a0 2d 5e b5 $.x..7....fe.-^. 00:22:29.535 00000110 86 4c 83 2d 17 7c f9 68 06 2a 67 5a 69 f8 34 11 .L.-.|.h.*gZi.4. 00:22:29.535 00000120 14 41 55 a3 de 8f 62 88 1b a1 52 20 9f 57 ec 6a .AU...b...R .W.j 00:22:29.535 00000130 d6 43 ae 01 f6 2a 65 bf 58 55 ee bb 70 24 b5 c9 .C...*e.XU..p$.. 
00:22:29.535 00000140 a7 85 44 48 1a a5 2c e9 b3 63 a2 cc 78 ee eb 91 ..DH..,..c..x... 00:22:29.535 00000150 a0 f7 b8 47 5a e3 d6 da 04 6a ce e9 0a 6b a1 eb ...GZ....j...k.. 00:22:29.535 00000160 a1 68 66 43 74 05 1d 03 75 cd a0 ee 44 a0 20 6f .hfCt...u...D. o 00:22:29.535 00000170 5c 2b eb 31 f7 3c b6 22 66 b6 40 d6 8c aa 32 42 \+.1.<."f.@...2B 00:22:29.535 host pubkey: 00:22:29.535 00000000 0c dc b9 6e f0 90 51 8f 0e 8d 46 1a ef 60 9f bf ...n..Q...F..`.. 00:22:29.535 00000010 7b 8a ee 42 68 48 aa 79 0d 11 e7 6a a5 93 37 be {..BhH.y...j..7. 00:22:29.535 00000020 eb 2d 63 68 67 e3 cc 9f 1b ad b1 5f 69 77 1e 08 .-chg......_iw.. 00:22:29.535 00000030 76 3a 15 88 9e 7d c4 56 e4 04 d6 fd 85 f9 fc 55 v:...}.V.......U 00:22:29.535 00000040 43 48 d1 b9 f2 2e 20 28 3f e5 22 bd 1c 0e c9 f2 CH.... (?."..... 00:22:29.535 00000050 67 02 64 25 9c fd 49 7c 88 75 b1 65 a2 f6 3b 4e g.d%..I|.u.e..;N 00:22:29.535 00000060 82 40 8f e0 bb 4c 7c e4 23 61 06 95 e3 e3 75 fe .@...L|.#a....u. 00:22:29.535 00000070 90 d2 e0 52 f9 5e b6 81 01 24 13 0c be 00 97 08 ...R.^...$...... 00:22:29.535 00000080 70 86 1e b3 eb ae 39 b4 2c 67 ef 5d 9a d3 72 16 p.....9.,g.]..r. 00:22:29.535 00000090 f8 76 6f 64 f1 82 2c ef dc 7f a0 9f 4d 52 91 78 .vod..,.....MR.x 00:22:29.535 000000a0 56 4c 30 9f 8c 90 07 44 66 b5 45 20 f6 60 8c a4 VL0....Df.E .`.. 00:22:29.535 000000b0 a5 32 fa 54 05 3f 66 a8 9a 17 7d b5 65 4a f5 dd .2.T.?f...}.eJ.. 00:22:29.535 000000c0 33 db 8a 6e bf 10 04 57 6a f4 81 ae 86 f5 78 15 3..n...Wj.....x. 00:22:29.535 000000d0 83 10 29 4c 0a b0 3f cb c3 7b ff 45 64 48 67 70 ..)L..?..{.EdHgp 00:22:29.535 000000e0 a4 4b 41 c6 f5 83 15 a6 95 8c 3b a8 55 11 53 70 .KA.......;.U.Sp 00:22:29.535 000000f0 a4 d0 20 ac d4 2f a6 1b 7f 14 78 34 c6 17 29 15 .. ../....x4..). 00:22:29.535 00000100 a9 59 1f a7 2a 6c 0f e8 e6 4e 6b 41 22 ae 96 b9 .Y..*l...NkA"... 00:22:29.535 00000110 27 c5 ee f2 87 bf 7e 4d 33 d8 e2 8c e6 91 bf 3f '.....~M3......? 00:22:29.535 00000120 b3 f1 85 fa bc 91 d1 b2 60 e5 90 21 d7 10 ce e0 ........`..!.... 00:22:29.535 00000130 bc d7 cd 2a 2c 45 c2 c0 b3 7e ab 94 9c d7 db 1c ...*,E...~...... 00:22:29.535 00000140 00 8d ce 99 1f cd 7a bd 71 7d 0c 5c c2 02 7a 79 ......z.q}.\..zy 00:22:29.535 00000150 10 09 44 87 4a 39 3d 68 6a 32 2e 70 98 bc 61 19 ..D.J9=hj2.p..a. 00:22:29.535 00000160 8b f6 81 35 2f 55 35 e1 12 46 70 f2 49 73 59 88 ...5/U5..Fp.IsY. 00:22:29.535 00000170 1f 2f 57 e5 d2 12 9c 60 2e 52 7c 16 14 87 36 56 ./W....`.R|...6V 00:22:29.535 dh secret: 00:22:29.535 00000000 7e 55 70 09 76 49 9a cc b9 09 b7 a5 99 70 97 96 ~Up.vI.......p.. 00:22:29.535 00000010 20 80 ff 22 17 1f e7 bd 10 7c b9 ee b3 e5 97 1d ..".....|...... 00:22:29.535 00000020 4f e0 2c 83 a5 ed b7 17 e3 4e bc a1 95 23 76 c4 O.,......N...#v. 00:22:29.535 00000030 64 d2 3d bf 9f 75 94 d3 fb 21 db 05 93 d2 20 df d.=..u...!.... . 00:22:29.535 00000040 a9 87 71 cf ef 60 9c bb aa 51 d8 cb f3 d7 46 e1 ..q..`...Q....F. 00:22:29.535 00000050 2c 61 88 1a f5 07 7d 6b c5 b9 dc fe 3e 00 8b 8f ,a....}k....>... 00:22:29.535 00000060 71 8a 80 f5 b3 4d 33 60 b2 62 92 00 52 e5 3e 55 q....M3`.b..R.>U 00:22:29.535 00000070 bd 61 53 12 38 81 d2 9c 78 db 75 c3 45 2a 82 09 .aS.8...x.u.E*.. 00:22:29.535 00000080 98 0b c1 00 82 4f bb f3 68 4c 96 ed fe 94 1b 96 .....O..hL...... 00:22:29.535 00000090 59 95 16 fd e3 a7 e3 ab db 37 7b d1 96 03 ab 3c Y........7{....< 00:22:29.535 000000a0 62 13 c3 e3 57 81 e2 83 2f 61 69 c2 b2 63 1e 05 b...W.../ai..c.. 00:22:29.535 000000b0 eb 45 8d 3f 48 6d db 33 f4 be 55 45 23 82 dc 9f .E.?Hm.3..UE#... 
00:22:29.535 000000c0 f7 d9 04 43 0e b5 d9 06 b3 03 11 5d 74 95 b5 47 ...C.......]t..G 00:22:29.535 000000d0 03 a3 92 98 e9 75 b5 02 f8 0d 33 b0 b7 f8 6a 3a .....u....3...j: 00:22:29.535 000000e0 1d 37 80 e1 02 f2 3b f1 da b0 04 1c c1 30 90 dd .7....;......0.. 00:22:29.535 000000f0 f3 76 c8 13 66 83 d7 0e c9 dc 42 93 e7 bd 48 18 .v..f.....B...H. 00:22:29.535 00000100 e3 24 70 fe 9d db 77 fe d2 e1 be 53 5d 19 67 5a .$p...w....S].gZ 00:22:29.535 00000110 5e 1c 37 7a 52 dc 8c 5a 20 91 e9 fd d2 45 c3 6b ^.7zR..Z ....E.k 00:22:29.535 00000120 ac 49 bb c8 2d c5 83 26 98 89 1e 83 c2 d6 cf b2 .I..-..&........ 00:22:29.535 00000130 c9 1a 9b 28 66 ad c2 22 2f d8 c0 43 60 75 23 3b ...(f.."/..C`u#; 00:22:29.535 00000140 80 ab c0 20 fe 1e 9e 84 c4 48 79 f9 79 44 7b 1f ... .....Hy.yD{. 00:22:29.535 00000150 b7 e4 9f 3c ac 0c 82 d5 73 99 c8 6c 5c 37 dc cd ...<....s..l\7.. 00:22:29.536 00000160 89 86 95 a5 91 db 73 9c f8 84 ee 6a d9 2e a6 09 ......s....j.... 00:22:29.536 00000170 29 15 66 42 a1 7f a4 19 e0 55 a6 84 64 b4 fe 28 ).fB.....U..d..( 00:22:29.536 [2024-09-27 13:27:22.300722] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key0, hash=3, dhgroup=2, seq=3775755283, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.536 [2024-09-27 13:27:22.301119] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.536 [2024-09-27 13:27:22.308643] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.536 [2024-09-27 13:27:22.309015] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.536 [2024-09-27 13:27:22.309201] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.536 [2024-09-27 13:27:22.309444] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.536 [2024-09-27 13:27:22.361255] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.536 [2024-09-27 13:27:22.361426] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.536 [2024-09-27 13:27:22.361669] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 2 (ffdhe3072) 00:22:29.536 [2024-09-27 13:27:22.361834] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.536 [2024-09-27 13:27:22.362077] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.536 ctrlr pubkey: 00:22:29.536 00000000 63 a0 09 97 fd 9f bf 66 50 d2 e9 12 0b 1f 74 dd c......fP.....t. 00:22:29.536 00000010 25 cc 13 b3 35 a5 73 ae a0 58 e8 c5 79 87 fc 13 %...5.s..X..y... 00:22:29.536 00000020 e7 31 2d b2 3c e2 1e 49 10 3c 78 b7 1c 1e ac 63 .1-.<..I.. 00:22:29.536 00000040 4d b1 a1 8e b1 6b 0e 2a 48 7a 78 1f 50 74 1c d2 M....k.*Hzx.Pt.. 00:22:29.536 00000050 f9 16 cb 35 23 aa 53 24 d5 4d af 9b 9a 20 35 ed ...5#.S$.M... 5. 
00:22:29.536 00000060 69 39 e9 06 b3 4e 15 a2 0d 6f 9a 1f 26 9d e4 20 i9...N...o..&.. 00:22:29.536 00000070 12 86 63 40 9d 2e aa 9c 41 58 fd 31 e6 59 24 97 ..c@....AX.1.Y$. 00:22:29.536 00000080 3a eb 54 88 b9 ce b3 6e 4b 24 53 62 a5 31 ec 09 :.T....nK$Sb.1.. 00:22:29.536 00000090 5b 66 7b 29 e1 67 63 67 50 b9 8a 14 dc fa af 2a [f{).gcgP......* 00:22:29.536 000000a0 04 b1 79 ab fe f1 9b 58 2f 48 c6 ba da 2f 7d f7 ..y....X/H.../}. 00:22:29.536 000000b0 8a 1a f0 ca 06 13 f5 8f 27 22 b3 8c 86 ee 91 a4 ........'"...... 00:22:29.536 000000c0 7b 69 6b c3 93 a2 45 12 0a 6c b2 c7 4f d1 a9 3d {ik...E..l..O..= 00:22:29.536 000000d0 7e 4d 9d fb c0 77 9e 48 c3 b0 d4 fb 25 c0 1d cb ~M...w.H....%... 00:22:29.536 000000e0 c2 6e 8a 12 31 a8 e6 61 6a c6 e6 62 59 b6 8c 8d .n..1..aj..bY... 00:22:29.536 000000f0 93 10 f7 ed 68 40 f0 1d c3 c4 7f 63 15 08 be 42 ....h@.....c...B 00:22:29.536 00000100 24 a8 78 fc 15 37 ce 9f bf ac 66 65 a0 2d 5e b5 $.x..7....fe.-^. 00:22:29.536 00000110 86 4c 83 2d 17 7c f9 68 06 2a 67 5a 69 f8 34 11 .L.-.|.h.*gZi.4. 00:22:29.536 00000120 14 41 55 a3 de 8f 62 88 1b a1 52 20 9f 57 ec 6a .AU...b...R .W.j 00:22:29.536 00000130 d6 43 ae 01 f6 2a 65 bf 58 55 ee bb 70 24 b5 c9 .C...*e.XU..p$.. 00:22:29.536 00000140 a7 85 44 48 1a a5 2c e9 b3 63 a2 cc 78 ee eb 91 ..DH..,..c..x... 00:22:29.536 00000150 a0 f7 b8 47 5a e3 d6 da 04 6a ce e9 0a 6b a1 eb ...GZ....j...k.. 00:22:29.536 00000160 a1 68 66 43 74 05 1d 03 75 cd a0 ee 44 a0 20 6f .hfCt...u...D. o 00:22:29.536 00000170 5c 2b eb 31 f7 3c b6 22 66 b6 40 d6 8c aa 32 42 \+.1.<."f.@...2B 00:22:29.536 host pubkey: 00:22:29.536 00000000 7c 33 c5 42 91 aa a8 20 84 18 d1 38 52 1c f3 98 |3.B... ...8R... 00:22:29.536 00000010 85 20 19 49 0a 47 0f 50 fd 90 70 05 3e 99 ee 04 . .I.G.P..p.>... 00:22:29.536 00000020 fb 95 63 60 e8 c7 06 e8 08 0e f2 c2 3e 8d 02 a0 ..c`........>... 00:22:29.536 00000030 a6 f4 01 c4 b0 27 7a 7a a8 07 09 85 ce 51 b2 4b .....'zz.....Q.K 00:22:29.536 00000040 84 95 b2 1c 88 1a 3e c9 1f 69 24 fe 3e 64 eb 53 ......>..i$.>d.S 00:22:29.536 00000050 5a 0d 83 fe 67 31 95 b2 84 a6 82 81 05 7e c1 8e Z...g1.......~.. 00:22:29.536 00000060 65 21 fe 47 be 3d 92 6a 22 24 f5 fd 11 96 ff 62 e!.G.=.j"$.....b 00:22:29.536 00000070 52 56 a3 83 26 16 1b 37 af 7b 69 97 46 5c b6 c1 RV..&..7.{i.F\.. 00:22:29.536 00000080 c3 52 c6 33 36 6c fb a9 4b 7a d1 03 5b 02 a3 ab .R.36l..Kz..[... 00:22:29.536 00000090 9a 96 4a 7b 71 63 4b 79 92 39 0a b1 e8 c0 56 86 ..J{qcKy.9....V. 00:22:29.536 000000a0 db 5f dc 42 16 83 ee 67 78 86 7c d7 4b c8 e1 e4 ._.B...gx.|.K... 00:22:29.536 000000b0 b0 f5 25 c3 e6 3d be 2e 31 9e 02 ae 66 31 3e 78 ..%..=..1...f1>x 00:22:29.536 000000c0 31 e4 93 b4 a9 6a e7 81 95 1a b5 7f dd 98 36 b4 1....j........6. 00:22:29.536 000000d0 85 b2 3a 38 e7 0c cb 43 ed f8 56 b6 88 23 24 b5 ..:8...C..V..#$. 00:22:29.536 000000e0 0e 72 05 8d 4f 36 e5 5a c8 d2 4a 55 9c e0 64 ac .r..O6.Z..JU..d. 00:22:29.536 000000f0 b3 18 4c 0e 5f bd 4f 51 45 10 fa 38 d3 b7 09 1d ..L._.OQE..8.... 00:22:29.536 00000100 73 7d 96 f8 2f d4 5d f7 f5 68 66 46 33 23 46 2e s}../.]..hfF3#F. 00:22:29.536 00000110 ea ec f2 59 ab 9c 7f eb aa 0b b0 66 0a 9d 0b fd ...Y.......f.... 00:22:29.536 00000120 0c a4 33 11 c8 09 da 2b 56 a9 67 8f 6a 6b 89 b0 ..3....+V.g.jk.. 00:22:29.536 00000130 44 04 4e ff 90 fa 3e e2 4e bf bb 93 c9 37 fe ba D.N...>.N....7.. 00:22:29.536 00000140 4b 6d ac 34 62 a7 85 f5 45 23 6d b3 1c bc 43 71 Km.4b...E#m...Cq 00:22:29.536 00000150 75 e9 cf 10 04 e8 95 12 17 f0 4d a0 ff 54 d4 1b u.........M..T.. 
00:22:29.536 00000160 95 39 88 2c d8 56 6b e2 72 ef 3b f6 9a 7d e1 97 .9.,.Vk.r.;..}.. 00:22:29.536 00000170 d4 63 64 07 43 db 07 8e 9b b0 51 2a ad 5a 5e fe .cd.C.....Q*.Z^. 00:22:29.536 dh secret: 00:22:29.536 00000000 45 0b 03 c4 73 6c 6f 94 fa 8f 8c a2 d9 b2 c3 96 E...slo......... 00:22:29.536 00000010 f8 d9 15 42 4a 15 ba 58 d1 d9 d9 e2 a5 3b 6b ce ...BJ..X.....;k. 00:22:29.536 00000020 80 23 53 d4 f1 bd 44 0f b6 8e 87 de 20 d0 6e 10 .#S...D..... .n. 00:22:29.536 00000030 96 d5 d8 6f 09 15 5d d7 35 c9 7f a6 5c 65 af 04 ...o..].5...\e.. 00:22:29.536 00000040 8f 7a d3 f5 d3 4e 93 28 94 e0 57 e3 43 67 18 68 .z...N.(..W.Cg.h 00:22:29.536 00000050 2b 83 0c 9c d4 cb 9d b4 99 7b 02 b3 84 7b d8 d1 +........{...{.. 00:22:29.536 00000060 fd 58 b2 0c e5 c0 b9 5f 4d e9 b0 55 d8 f6 3a 90 .X....._M..U..:. 00:22:29.536 00000070 38 b9 5c dd 0d 14 cf 9f d2 10 7f 72 bc f1 0e 63 8.\........r...c 00:22:29.536 00000080 e5 d5 06 62 c2 32 13 cd b0 f2 60 d0 b7 12 34 42 ...b.2....`...4B 00:22:29.536 00000090 76 85 12 b4 46 c8 d3 6b fb 79 a1 d2 a3 05 16 c9 v...F..k.y...... 00:22:29.536 000000a0 c5 86 0f f3 9a 8a f0 9b 8b 53 a5 b1 48 40 35 d5 .........S..H@5. 00:22:29.536 000000b0 cf c9 98 8a 29 13 d3 f3 11 8d 20 9b aa 6c e6 09 ....)..... ..l.. 00:22:29.536 000000c0 79 5e 48 3b 47 4c 89 54 b4 ea 89 0e 06 13 ba c3 y^H;GL.T........ 00:22:29.536 000000d0 35 66 d7 27 7d b5 40 87 73 9c 0b 90 47 ce 53 0f 5f.'}.@.s...G.S. 00:22:29.536 000000e0 04 0b 92 ae 00 14 cb 3c 87 48 0a 03 35 d4 30 a0 .......<.H..5.0. 00:22:29.536 000000f0 b8 df 60 dd 94 08 9d ce 0d 73 20 fd 4c 15 db 21 ..`......s .L..! 00:22:29.536 00000100 72 78 83 af 21 8d de f0 d5 75 b9 d5 a9 99 87 33 rx..!....u.....3 00:22:29.536 00000110 d0 19 1b e9 8a b5 92 15 8d ec 2f 92 10 23 6a f5 ........../..#j. 00:22:29.536 00000120 54 f1 e8 a7 3b c3 d0 a4 a6 3e 5c 2d 8d 7f 92 4c T...;....>\-...L 00:22:29.536 00000130 bb a4 e8 ee ba ba 56 09 6c f6 00 19 19 21 e7 a8 ......V.l....!.. 00:22:29.536 00000140 45 ee 15 d2 03 00 24 69 8f 74 96 8c 3b 43 b5 70 E.....$i.t..;C.p 00:22:29.536 00000150 db c3 be 26 a3 fd 2a 20 f3 fa e1 8a e8 00 cd dd ...&..* ........ 00:22:29.536 00000160 29 1d db 6e da ce 7b 87 78 92 76 b8 ed 98 1b fd )..n..{.x.v..... 
00:22:29.536 00000170 57 c2 0d 24 39 3d 1a 43 c6 9d c9 0e e6 c7 3e 43 W..$9=.C......>C 00:22:29.536 [2024-09-27 13:27:22.377182] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key0, hash=3, dhgroup=2, seq=3775755284, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.536 [2024-09-27 13:27:22.377560] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.536 [2024-09-27 13:27:22.385189] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.536 [2024-09-27 13:27:22.385613] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.536 [2024-09-27 13:27:22.385847] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.536 [2024-09-27 13:27:22.386077] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.536 [2024-09-27 13:27:22.483406] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.536 [2024-09-27 13:27:22.483713] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.536 [2024-09-27 13:27:22.483898] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 2 (ffdhe3072) 00:22:29.536 [2024-09-27 13:27:22.484050] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.536 [2024-09-27 13:27:22.484301] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.536 ctrlr pubkey: 00:22:29.536 00000000 f4 10 61 61 64 69 fb 37 56 82 5f c7 fa 38 06 fe ..aadi.7V._..8.. 00:22:29.536 00000010 1c 90 5e 7d 92 ba 94 db 46 f2 75 a8 38 e4 0f 8e ..^}....F.u.8... 00:22:29.536 00000020 e2 7f 6b a3 23 38 e9 b1 d4 1e 37 00 f3 9b 58 c3 ..k.#8....7...X. 00:22:29.536 00000030 ce f6 67 3f b2 04 63 64 b8 b4 51 55 8e de 32 c1 ..g?..cd..QU..2. 00:22:29.536 00000040 bd e4 29 03 20 63 5b 74 f8 d3 62 10 f3 84 d3 af ..). c[t..b..... 00:22:29.536 00000050 7c 1d 01 7f 23 5b 60 39 7d e5 4f 9c cf 9d b0 51 |...#[`9}.O....Q 00:22:29.536 00000060 b4 f3 be 7c c1 4b 49 a3 a0 fc dd 20 b8 a8 d2 d9 ...|.KI.... .... 00:22:29.536 00000070 6b 03 01 57 10 c0 19 04 02 33 d2 82 d8 23 04 ea k..W.....3...#.. 00:22:29.536 00000080 5c 6f d9 50 89 21 6d 19 3a 6d fa 19 6d b8 99 b9 \o.P.!m.:m..m... 00:22:29.536 00000090 31 96 74 2a cc e2 fc 0a 82 1c 9f 48 cc d8 2d 5f 1.t*.......H..-_ 00:22:29.536 000000a0 70 17 1b a7 35 6c 3e 9b 20 71 10 a5 47 4d a7 2e p...5l>. q..GM.. 00:22:29.536 000000b0 aa c1 23 a3 4c c5 79 d0 9e 61 23 7d 21 a3 ff 0a ..#.L.y..a#}!... 00:22:29.536 000000c0 f1 6a b6 1b bf dd 5c 1a c3 e4 44 f7 12 8f 02 da .j....\...D..... 00:22:29.536 000000d0 06 30 53 89 bd 36 6e d9 ac db f1 c5 28 0d 9f a1 .0S..6n.....(... 00:22:29.536 000000e0 33 18 4e 59 e1 92 a2 23 64 05 d0 c5 d0 61 db b0 3.NY...#d....a.. 
00:22:29.536 000000f0 b3 0b c0 0a a5 56 86 4b 21 35 c5 66 56 d1 2e 5a .....V.K!5.fV..Z 00:22:29.536 00000100 bb e3 77 d3 d1 31 a9 ee 4b 0d 55 9c 67 e1 d1 c8 ..w..1..K.U.g... 00:22:29.536 00000110 ce fe 76 24 a4 47 3b d8 52 21 9d 73 ed 9e a3 bc ..v$.G;.R!.s.... 00:22:29.536 00000120 59 0a 0c 3c 29 42 da 5f 72 51 87 0b 8e 02 d6 fc Y..<)B._rQ...... 00:22:29.536 00000130 ba fd db db 81 d4 92 26 3d e0 ef 23 25 25 f4 66 .......&=..#%%.f 00:22:29.536 00000140 3a 7e 06 4c 01 f2 69 78 d5 c7 38 1f 2c 03 91 e0 :~.L..ix..8.,... 00:22:29.536 00000150 84 27 ea 4f e5 58 13 90 4f cc 3a 41 b9 4d bf 78 .'.O.X..O.:A.M.x 00:22:29.536 00000160 d6 d0 2a fe 38 4a 0f 5a 50 80 fa b8 fc a8 13 63 ..*.8J.ZP......c 00:22:29.536 00000170 d0 34 b8 ca a2 f7 54 22 c5 23 ef df c3 d5 9b e6 .4....T".#...... 00:22:29.536 host pubkey: 00:22:29.536 00000000 e1 aa 38 dd 8e eb 7e 87 b0 82 8a 4e ec aa f6 13 ..8...~....N.... 00:22:29.536 00000010 bf f3 ff fc 1e 66 b2 fd 77 b9 1a fb 12 89 21 f5 .....f..w.....!. 00:22:29.536 00000020 e4 7a 4b 46 43 7e 07 43 36 4a b0 6f 97 44 22 bb .zKFC~.C6J.o.D". 00:22:29.536 00000030 f6 e7 28 45 c6 6a d4 f3 5c 14 3c c9 03 46 00 c2 ..(E.j..\.<..F.. 00:22:29.536 00000040 47 5d f0 70 54 62 4b 0c a5 50 74 02 bd ad cc 34 G].pTbK..Pt....4 00:22:29.536 00000050 8c a5 29 8b b4 2a b3 c6 4b d0 67 67 97 bb 87 85 ..)..*..K.gg.... 00:22:29.536 00000060 46 cb 32 40 d4 9d 7d db ba 5d 0d 5b 2f 70 2e b4 F.2@..}..].[/p.. 00:22:29.536 00000070 fd 3c d5 c5 9b d3 3f 3b 21 d0 a0 b1 8a 0a 08 ee .<....?;!....... 00:22:29.536 00000080 51 ce ab 8b 75 97 45 c5 62 49 38 ea 83 b5 d4 43 Q...u.E.bI8....C 00:22:29.536 00000090 e2 c8 24 9c f3 6f d6 03 a6 b5 58 ae 83 43 78 c1 ..$..o....X..Cx. 00:22:29.536 000000a0 18 5b 3f ac b3 5c 64 7b f8 9a 29 bc 4b 9a a4 de .[?..\d{..).K... 00:22:29.536 000000b0 27 cc 16 83 ce fc f5 ae 45 fa 4a fb 7b 7c 57 3a '.......E.J.{|W: 00:22:29.536 000000c0 18 00 4b b7 ba 3b d5 e4 cd 7e 4a 30 9d b2 c5 bb ..K..;...~J0.... 00:22:29.536 000000d0 28 2b 90 2d 40 fa 0e 86 2f 17 40 9c a9 a5 ca 37 (+.-@.../.@....7 00:22:29.536 000000e0 2e ad 60 33 aa fb d9 f9 60 76 92 aa 5f d0 fc 24 ..`3....`v.._..$ 00:22:29.536 000000f0 94 14 e6 4b 13 8e ff 3a 21 19 a1 41 66 d5 a6 b6 ...K...:!..Af... 00:22:29.536 00000100 4b 3b 86 38 7b 17 2c 93 79 30 91 1c f4 78 ff db K;.8{.,.y0...x.. 00:22:29.536 00000110 03 12 73 d8 05 2a 3a ba 29 24 44 4b 50 74 71 c7 ..s..*:.)$DKPtq. 00:22:29.536 00000120 27 f7 be f1 53 ae dd 89 53 21 91 e7 e1 70 72 3f '...S...S!...pr? 00:22:29.536 00000130 31 0e de 9d b2 a6 9b 96 c5 5c 70 98 0a 86 c1 8a 1........\p..... 00:22:29.536 00000140 dd 94 10 d7 67 fa 1d a5 6b 0c 21 33 c6 ee 9b fd ....g...k.!3.... 00:22:29.536 00000150 ac c4 9b 9c 3f ef dd fb f5 17 92 3d bd f1 33 e8 ....?......=..3. 00:22:29.536 00000160 6e 55 2b a4 57 be 3f a9 33 1b 86 92 c3 22 79 8c nU+.W.?.3...."y. 00:22:29.537 00000170 90 7e b1 1f db fb 41 88 ae b9 90 fa 1a 46 52 61 .~....A......FRa 00:22:29.537 dh secret: 00:22:29.537 00000000 8e 2d 01 ca e6 76 2c 91 5d ed b2 52 13 7c 15 cf .-...v,.]..R.|.. 00:22:29.537 00000010 03 bc ba 0d 53 f6 d1 cf f6 5a 49 41 12 05 56 2a ....S....ZIA..V* 00:22:29.537 00000020 87 13 7c 5d 77 dc 21 b6 8b bc 29 4b 28 0f 09 12 ..|]w.!...)K(... 00:22:29.537 00000030 f9 a3 19 bb be 96 3d 79 a8 b1 ee 00 bc 02 fb d3 ......=y........ 00:22:29.537 00000040 2c d9 e7 f1 bc ce 26 88 51 52 c0 61 d9 b3 df d5 ,.....&.QR.a.... 
00:22:29.537 00000050 41 f0 46 9e aa d9 26 ec eb 98 6f 50 6e 18 d4 68 A.F...&...oPn..h 00:22:29.537 00000060 59 c9 d8 c0 72 76 61 68 21 f7 83 e8 d7 26 18 24 Y...rvah!....&.$ 00:22:29.537 00000070 78 87 0d d3 5c 6f 37 3c f7 84 af f7 cc a7 c2 1c x...\o7<........ 00:22:29.537 00000080 64 35 41 89 6b 8d da b2 3d 8f 00 18 1f b5 17 06 d5A.k...=....... 00:22:29.537 00000090 a3 a5 ba af f1 bd 6d 62 77 46 9c 6b 44 26 13 6b ......mbwF.kD&.k 00:22:29.537 000000a0 9a ca 88 68 0a 05 89 34 e3 79 9d d6 61 74 98 f1 ...h...4.y..at.. 00:22:29.537 000000b0 d5 3e 2a c7 e9 56 0d f5 a3 f2 10 52 e7 47 b4 94 .>*..V.....R.G.. 00:22:29.537 000000c0 fd 88 b3 a6 85 2c 5b 64 4c 55 73 04 c7 14 45 17 .....,[dLUs...E. 00:22:29.537 000000d0 07 7e d0 3d 78 b2 4e b9 1c 4c bd 9d 7c 55 36 e7 .~.=x.N..L..|U6. 00:22:29.537 000000e0 e7 d9 3e e5 cb e1 f5 65 43 a1 18 27 ee 93 d2 73 ..>....eC..'...s 00:22:29.537 000000f0 7b bd bf f9 ed 27 c3 41 5d 17 30 32 98 2b a5 eb {....'.A].02.+.. 00:22:29.537 00000100 33 15 78 96 af da 26 72 46 90 9e e2 7e 11 f9 e7 3.x...&rF...~... 00:22:29.537 00000110 0b 1b 40 4c 74 44 7a de d9 6e 7a 90 f7 b8 be 3a ..@LtDz..nz....: 00:22:29.537 00000120 2b 30 6a 0c ef d7 56 60 bc a3 74 2f 54 79 7c ee +0j...V`..t/Ty|. 00:22:29.537 00000130 e7 61 a2 96 8b 9d 36 d2 f8 94 18 42 51 9c bb fa .a....6....BQ... 00:22:29.537 00000140 ed 82 ab f2 1a 35 a8 e3 aa ce 24 24 c2 e4 38 85 .....5....$$..8. 00:22:29.537 00000150 e6 88 36 a0 30 d7 00 15 a4 76 09 08 25 22 29 a4 ..6.0....v..%"). 00:22:29.537 00000160 91 de 9b ec b0 72 b5 f0 ba a6 e0 c3 a5 53 eb 02 .....r.......S.. 00:22:29.537 00000170 39 09 87 59 9b 5a 66 48 dc 13 e0 c7 4e 49 70 da 9..Y.ZfH....NIp. 00:22:29.537 [2024-09-27 13:27:22.499222] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key1, hash=3, dhgroup=2, seq=3775755285, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.537 [2024-09-27 13:27:22.499530] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.537 [2024-09-27 13:27:22.507756] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.537 [2024-09-27 13:27:22.508050] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.537 [2024-09-27 13:27:22.508343] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.537 [2024-09-27 13:27:22.508529] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.537 [2024-09-27 13:27:22.560450] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.537 [2024-09-27 13:27:22.560640] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.537 [2024-09-27 13:27:22.560838] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 2 (ffdhe3072) 00:22:29.537 [2024-09-27 13:27:22.560967] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.537 [2024-09-27 13:27:22.561193] nvme_auth.c: 
163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.537 ctrlr pubkey: 00:22:29.537 00000000 f4 10 61 61 64 69 fb 37 56 82 5f c7 fa 38 06 fe ..aadi.7V._..8.. 00:22:29.537 00000010 1c 90 5e 7d 92 ba 94 db 46 f2 75 a8 38 e4 0f 8e ..^}....F.u.8... 00:22:29.537 00000020 e2 7f 6b a3 23 38 e9 b1 d4 1e 37 00 f3 9b 58 c3 ..k.#8....7...X. 00:22:29.537 00000030 ce f6 67 3f b2 04 63 64 b8 b4 51 55 8e de 32 c1 ..g?..cd..QU..2. 00:22:29.537 00000040 bd e4 29 03 20 63 5b 74 f8 d3 62 10 f3 84 d3 af ..). c[t..b..... 00:22:29.537 00000050 7c 1d 01 7f 23 5b 60 39 7d e5 4f 9c cf 9d b0 51 |...#[`9}.O....Q 00:22:29.537 00000060 b4 f3 be 7c c1 4b 49 a3 a0 fc dd 20 b8 a8 d2 d9 ...|.KI.... .... 00:22:29.537 00000070 6b 03 01 57 10 c0 19 04 02 33 d2 82 d8 23 04 ea k..W.....3...#.. 00:22:29.537 00000080 5c 6f d9 50 89 21 6d 19 3a 6d fa 19 6d b8 99 b9 \o.P.!m.:m..m... 00:22:29.537 00000090 31 96 74 2a cc e2 fc 0a 82 1c 9f 48 cc d8 2d 5f 1.t*.......H..-_ 00:22:29.537 000000a0 70 17 1b a7 35 6c 3e 9b 20 71 10 a5 47 4d a7 2e p...5l>. q..GM.. 00:22:29.537 000000b0 aa c1 23 a3 4c c5 79 d0 9e 61 23 7d 21 a3 ff 0a ..#.L.y..a#}!... 00:22:29.537 000000c0 f1 6a b6 1b bf dd 5c 1a c3 e4 44 f7 12 8f 02 da .j....\...D..... 00:22:29.537 000000d0 06 30 53 89 bd 36 6e d9 ac db f1 c5 28 0d 9f a1 .0S..6n.....(... 00:22:29.537 000000e0 33 18 4e 59 e1 92 a2 23 64 05 d0 c5 d0 61 db b0 3.NY...#d....a.. 00:22:29.537 000000f0 b3 0b c0 0a a5 56 86 4b 21 35 c5 66 56 d1 2e 5a .....V.K!5.fV..Z 00:22:29.537 00000100 bb e3 77 d3 d1 31 a9 ee 4b 0d 55 9c 67 e1 d1 c8 ..w..1..K.U.g... 00:22:29.537 00000110 ce fe 76 24 a4 47 3b d8 52 21 9d 73 ed 9e a3 bc ..v$.G;.R!.s.... 00:22:29.537 00000120 59 0a 0c 3c 29 42 da 5f 72 51 87 0b 8e 02 d6 fc Y..<)B._rQ...... 00:22:29.537 00000130 ba fd db db 81 d4 92 26 3d e0 ef 23 25 25 f4 66 .......&=..#%%.f 00:22:29.537 00000140 3a 7e 06 4c 01 f2 69 78 d5 c7 38 1f 2c 03 91 e0 :~.L..ix..8.,... 00:22:29.537 00000150 84 27 ea 4f e5 58 13 90 4f cc 3a 41 b9 4d bf 78 .'.O.X..O.:A.M.x 00:22:29.537 00000160 d6 d0 2a fe 38 4a 0f 5a 50 80 fa b8 fc a8 13 63 ..*.8J.ZP......c 00:22:29.537 00000170 d0 34 b8 ca a2 f7 54 22 c5 23 ef df c3 d5 9b e6 .4....T".#...... 00:22:29.537 host pubkey: 00:22:29.537 00000000 30 74 d2 aa ff 29 ba c5 be a4 24 3e 42 6b 86 a3 0t...)....$>Bk.. 00:22:29.537 00000010 a9 1e 5f 33 d5 f1 ac ac 7d f3 96 cd 72 21 e5 5e .._3....}...r!.^ 00:22:29.537 00000020 70 b1 ee e4 b7 e0 40 b3 ac 6b 03 3b b9 46 58 8b p.....@..k.;.FX. 00:22:29.537 00000030 62 07 84 f8 94 18 6a f7 b0 52 95 58 6d 53 25 be b.....j..R.XmS%. 00:22:29.537 00000040 1b e1 bf 25 58 f1 35 70 1a 10 34 e3 cc c9 d0 6a ...%X.5p..4....j 00:22:29.537 00000050 38 9f 85 30 07 20 b3 0b 74 40 bd cc ce 48 f7 a8 8..0. ..t@...H.. 00:22:29.537 00000060 62 ec d4 03 99 85 2d bc b2 1c 6c c9 38 48 ce f8 b.....-...l.8H.. 00:22:29.537 00000070 c7 a9 dd 00 9e 81 b1 7b bc 3a b9 85 70 c9 09 f8 .......{.:..p... 00:22:29.537 00000080 59 ed 5f 69 42 c0 e7 09 a5 92 46 21 c3 6f 1c d4 Y._iB.....F!.o.. 00:22:29.537 00000090 89 5a 12 c1 87 f4 00 20 45 20 93 48 c3 7f 74 70 .Z..... E .H..tp 00:22:29.537 000000a0 92 57 21 e9 0a 60 39 ed e2 73 1c 41 b2 a9 ec 2e .W!..`9..s.A.... 00:22:29.537 000000b0 af fe c4 f9 86 64 d6 bb 69 45 09 d5 88 3d 6f 41 .....d..iE...=oA 00:22:29.537 000000c0 95 48 65 85 f8 29 71 0e 4a e6 31 f6 d4 e3 b3 7e .He..)q.J.1....~ 00:22:29.537 000000d0 70 1a 81 a2 98 2e 6d 17 f7 b8 72 a6 bd 10 4f bd p.....m...r...O. 
00:22:29.537 000000e0 5e f7 43 85 d3 32 37 69 59 02 b6 9e 34 d2 33 68 ^.C..27iY...4.3h 00:22:29.537 000000f0 d2 53 74 95 c6 b5 d6 ff be ef b1 d8 67 92 69 3f .St.........g.i? 00:22:29.537 00000100 90 d0 b3 d3 64 7f 73 85 0c 11 f5 8b a3 ae 33 86 ....d.s.......3. 00:22:29.537 00000110 2b 3a b9 ee 95 6e 15 c6 6a 7d ac 16 70 2d 48 7c +:...n..j}..p-H| 00:22:29.537 00000120 27 9e 53 b5 d3 54 2e 71 16 d0 ee 21 8d 11 18 74 '.S..T.q...!...t 00:22:29.537 00000130 09 ff 79 4b 35 a3 06 3d 06 20 f2 ce 46 b8 53 66 ..yK5..=. ..F.Sf 00:22:29.537 00000140 87 1a 8b 5a f2 f7 b2 73 03 ba 5d 8a 24 1d e0 cd ...Z...s..].$... 00:22:29.537 00000150 bd 57 1b bb 05 f7 46 e0 4b 04 8d 82 82 c7 1a 4c .W....F.K......L 00:22:29.537 00000160 93 75 4e 29 1f 19 ce f6 76 15 e4 c6 33 7f f9 fb .uN)....v...3... 00:22:29.537 00000170 99 54 8d 5e 7d 46 70 04 03 b1 fa cf e9 64 6a 63 .T.^}Fp......djc 00:22:29.537 dh secret: 00:22:29.537 00000000 6c 29 1b 4a 99 ed df 64 f3 d5 a5 6c 14 13 95 6d l).J...d...l...m 00:22:29.537 00000010 83 d5 a0 dd 9f b8 c2 26 bb 5e 87 54 f2 7e 13 23 .......&.^.T.~.# 00:22:29.537 00000020 23 75 c0 97 f6 c8 94 cc ee 00 72 58 6f bd 17 d4 #u........rXo... 00:22:29.537 00000030 a4 be 50 ce 5c ae 4b ff 6c 41 a3 a7 b9 8f e0 33 ..P.\.K.lA.....3 00:22:29.537 00000040 d4 fb 39 0b 27 50 5d 92 7a c5 0f 45 44 a3 5a bb ..9.'P].z..ED.Z. 00:22:29.537 00000050 e4 ba 2a 05 65 0b ad f6 11 5b aa b2 3e 54 d6 3b ..*.e....[..>T.; 00:22:29.537 00000060 cb 1a a7 01 a3 5b eb 1e 8f ad e4 90 f4 0b 70 74 .....[........pt 00:22:29.537 00000070 82 d4 7f 17 0d a9 a1 05 41 7c 75 4c 22 e0 00 bc ........A|uL"... 00:22:29.537 00000080 9f 08 c4 52 06 78 d7 3c 2b b7 97 b5 0d 7a e1 d1 ...R.x.<+....z.. 00:22:29.537 00000090 85 2d 1c c1 30 18 c1 ae df 15 8a 33 51 fb b8 73 .-..0......3Q..s 00:22:29.537 000000a0 96 97 06 70 e4 35 2f 9e b7 5e 2d ff 5e 67 dc 97 ...p.5/..^-.^g.. 00:22:29.537 000000b0 1c d7 08 89 75 10 2e f6 14 b6 e5 74 94 86 cb 79 ....u......t...y 00:22:29.537 000000c0 db a9 9f 6b ae 47 93 86 0b 83 d3 c1 13 e4 06 ab ...k.G.......... 00:22:29.537 000000d0 43 08 5a ec 0a 0a 91 f9 c6 2a 0f 1f 94 4d 5c 1f C.Z......*...M\. 00:22:29.537 000000e0 30 2f 86 aa 43 12 4d 43 fb 90 e7 db 5e 2d dd 0c 0/..C.MC....^-.. 00:22:29.537 000000f0 ea 10 4a f7 0a 9f c0 c7 6b 6f 4c 51 f3 52 8b e1 ..J.....koLQ.R.. 00:22:29.537 00000100 8f a0 a2 44 15 a5 89 59 63 31 d5 cb b0 3f ce b4 ...D...Yc1...?.. 00:22:29.537 00000110 3c 6a 48 f4 80 c6 7e 94 5a 37 e5 7b 5c 2d 90 e4 !J.. 00:22:29.537 00000030 c3 a8 f0 52 bc 3c 9b d2 74 f6 26 7f 4d 4f f2 f8 ...R.<..t.&.MO.. 00:22:29.537 00000040 a6 9a 21 20 b8 43 62 c7 77 82 2a 6d a0 90 04 81 ..! .Cb.w.*m.... 00:22:29.537 00000050 f9 55 ea bc 0a 1e 6d b0 0d 88 35 d9 01 71 8d 66 .U....m...5..q.f 00:22:29.537 00000060 a6 eb 28 9e 4a a8 bd 56 81 86 9a 7a 80 ee 77 85 ..(.J..V...z..w. 00:22:29.537 00000070 c2 b8 1b c0 89 92 fa 0b 98 42 5c 94 a5 1d 94 95 .........B\..... 00:22:29.538 00000080 61 ce 1e b8 1c b7 08 65 b5 8e a3 b4 dd c5 12 16 a......e........ 00:22:29.538 00000090 e1 ff f5 d7 52 78 8e 9a 55 81 8f 4e 7d d3 42 03 ....Rx..U..N}.B. 00:22:29.538 000000a0 0e ca 63 54 ee da b8 99 0e 59 7b 96 5b da 10 aa ..cT.....Y{.[... 00:22:29.538 000000b0 a6 e8 0b 34 2a 75 01 33 43 92 b4 d8 90 4f ef 3e ...4*u.3C....O.> 00:22:29.538 000000c0 6f fb 7c 64 a4 09 a1 81 59 19 ee d8 5d 9a 5e ec o.|d....Y...].^. 00:22:29.538 000000d0 68 5f 50 20 39 f1 6b 30 64 dc 13 1e 10 86 00 ca h_P 9.k0d....... 00:22:29.538 000000e0 8d 1f 36 47 38 9d 5f 58 45 e0 9d 27 93 b8 f0 1d ..6G8._XE..'.... 
00:22:29.538 000000f0 e6 fb 61 ce e4 92 4c 0c b7 c0 bc c7 f3 3e 74 c5 ..a...L......>t. 00:22:29.538 00000100 25 cb f6 d5 63 6c 2c 38 cd a4 4a 71 ab 40 68 7b %...cl,8..Jq.@h{ 00:22:29.538 00000110 42 df 8d 62 77 10 12 10 0d c5 c2 74 5e 0f 5d 9c B..bw......t^.]. 00:22:29.538 00000120 81 8f 16 87 db 15 43 ea a6 80 ed 6a da 4f fa 59 ......C....j.O.Y 00:22:29.538 00000130 12 be 07 2b 9e 2d 15 77 7a 91 82 6c 88 87 90 dd ...+.-.wz..l.... 00:22:29.538 00000140 2b d0 44 a4 98 51 a6 cc bd 3e df 78 c9 d1 96 9c +.D..Q...>.x.... 00:22:29.538 00000150 30 7e f2 49 a3 6c 3f d3 d6 de 14 53 c0 d7 d0 96 0~.I.l?....S.... 00:22:29.538 00000160 df f3 c9 76 02 fa 5e 13 69 c8 ef db 7c a4 6e 22 ...v..^.i...|.n" 00:22:29.538 00000170 8d 6a c2 ca d1 1b 51 d2 dd 8e bc d1 25 87 e1 3d .j....Q.....%..= 00:22:29.538 host pubkey: 00:22:29.538 00000000 a6 c1 d9 34 02 af 2d 75 f4 28 ea 61 b2 1c 88 d4 ...4..-u.(.a.... 00:22:29.538 00000010 c3 04 01 cd f7 b7 57 93 83 33 1d 8a fb dc c8 0b ......W..3...... 00:22:29.538 00000020 25 8c dd e3 14 9b 4d 07 01 23 fd f9 f8 c9 a1 26 %.....M..#.....& 00:22:29.538 00000030 7e 63 67 dd d7 c5 cb 05 c8 de 33 d1 61 fa b8 dd ~cg.......3.a... 00:22:29.538 00000040 55 50 c3 44 8b c1 de 32 a2 76 bd a8 a0 00 18 1c UP.D...2.v...... 00:22:29.538 00000050 73 3d 5a 2a 1d 4f 1d 22 ae 21 0e 9c e8 bb 6c 6b s=Z*.O.".!....lk 00:22:29.538 00000060 b8 44 a8 6c a3 59 38 cc 9d 3d b8 69 21 10 b8 ac .D.l.Y8..=.i!... 00:22:29.538 00000070 ee 98 7c 77 d1 b6 9e 76 8c 89 fe 0e a1 0c 47 67 ..|w...v......Gg 00:22:29.538 00000080 ef 42 52 ce 67 1e 86 88 f6 59 07 e1 fe 9c cd c0 .BR.g....Y...... 00:22:29.538 00000090 88 82 6c 9a d6 a6 7f 31 65 21 a3 a3 2a 29 79 93 ..l....1e!..*)y. 00:22:29.538 000000a0 1a a2 1c 48 e2 5c 74 5a c5 3a 5c 55 14 88 c7 b4 ...H.\tZ.:\U.... 00:22:29.538 000000b0 99 52 5b ad d8 6c 27 c7 5b 1b 97 4a 61 c6 d8 c3 .R[..l'.[..Ja... 00:22:29.538 000000c0 b3 5e 97 34 72 4b 5b a3 38 b2 a7 eb 32 cc ab 45 .^.4rK[.8...2..E 00:22:29.538 000000d0 da b1 ec c0 80 29 af 41 b2 f9 fd 27 d3 fb e2 30 .....).A...'...0 00:22:29.538 000000e0 75 ee 4d 9f 22 bd af 3a 02 c3 a1 a3 20 92 24 95 u.M."..:.... .$. 00:22:29.538 000000f0 f9 3e 07 1f bf 63 bf ef ee 86 62 db b3 a6 68 69 .>...c....b...hi 00:22:29.538 00000100 33 c0 54 1a 2d 33 ee 2f ff f2 1b 41 81 c7 a4 77 3.T.-3./...A...w 00:22:29.538 00000110 0e 24 90 d2 97 39 ed a7 e2 4f 86 0c b1 a7 6a 8a .$...9...O....j. 00:22:29.538 00000120 b8 75 1a 5c d0 69 78 6f 2e 15 c8 c7 1b f9 f0 09 .u.\.ixo........ 00:22:29.538 00000130 34 90 81 03 62 64 a5 83 4c 09 5f 59 93 00 19 4a 4...bd..L._Y...J 00:22:29.538 00000140 d1 43 3d aa 65 41 0a 24 1e df dd 8b 27 68 5a a9 .C=.eA.$....'hZ. 00:22:29.538 00000150 89 e7 9b bc 29 3f f2 ad eb 38 3c d2 83 39 4e 3f ....)?...8<..9N? 00:22:29.538 00000160 60 fe cb 79 19 56 b9 bb ae 2d be 28 1a 77 e2 ae `..y.V...-.(.w.. 00:22:29.538 00000170 29 4f 5d 2f f3 1d 08 cf d6 41 b1 cb 09 18 7c 5c )O]/.....A....|\ 00:22:29.538 dh secret: 00:22:29.538 00000000 85 2c 9b 1d ed c0 7a f3 fa 9b 3d f7 47 25 1f f4 .,....z...=.G%.. 00:22:29.538 00000010 e1 9b 0f f5 15 fd 9f a1 2c 58 ee c3 77 9c 8a c2 ........,X..w... 00:22:29.538 00000020 56 a3 ac d7 66 82 90 70 19 60 17 a7 a8 ad 1c 90 V...f..p.`...... 00:22:29.538 00000030 d7 5d 4a 89 6c a7 96 71 31 cf bb fb b0 64 10 2f .]J.l..q1....d./ 00:22:29.538 00000040 d4 23 23 21 76 06 c0 b4 10 9a 54 74 13 08 61 ba .##!v.....Tt..a. 00:22:29.538 00000050 32 63 2d c6 7b bf 68 ac dd 75 87 21 8d 1d b7 cd 2c-.{.h..u.!.... 
00:22:29.538 00000060 83 09 d4 80 db 3d 96 c3 9b 71 c3 e0 22 55 30 2c .....=...q.."U0, 00:22:29.538 00000070 dd e9 fa ce 03 4f 3c ed e2 60 14 7f ce ed ee e2 .....O<..`...... 00:22:29.538 00000080 d8 94 90 28 92 0c 49 b7 ef 4e ea cb 32 c0 36 18 ...(..I..N..2.6. 00:22:29.538 00000090 df 5d 09 f7 7e 36 ee d7 aa 1c b3 81 51 a4 cc f2 .]..~6......Q... 00:22:29.538 000000a0 cf ec 87 95 73 30 c6 09 d0 4e c5 77 0a f3 73 93 ....s0...N.w..s. 00:22:29.538 000000b0 b8 36 fc 38 d4 cb 38 b6 d9 21 cf b0 27 7a ec d4 .6.8..8..!..'z.. 00:22:29.538 000000c0 c4 10 c9 8d bc ec 56 17 3e ca 85 16 de c7 33 76 ......V.>.....3v 00:22:29.538 000000d0 48 bf c2 91 1b fe 0a d1 dc c3 b5 a9 37 8e 35 bb H...........7.5. 00:22:29.538 000000e0 55 a4 e8 a8 b5 ce 2a b8 5f 9b 10 cb 50 70 d9 c2 U.....*._...Pp.. 00:22:29.538 000000f0 5d c3 4c 96 c4 e4 bb a6 75 ea f5 0c 57 55 2e 35 ].L.....u...WU.5 00:22:29.538 00000100 24 9d ee 45 24 88 c7 a0 2f 71 61 22 db 1e cb 79 $..E$.../qa"...y 00:22:29.538 00000110 c7 f1 a3 58 a6 01 11 5c ca f0 28 69 2e 3e d7 8a ...X...\..(i.>.. 00:22:29.538 00000120 14 e5 ca cc 41 db ae 02 6b 92 71 f9 3b b8 03 28 ....A...k.q.;..( 00:22:29.538 00000130 34 f1 e2 09 0f 86 f2 e9 57 0c 50 82 29 b6 16 f1 4.......W.P.)... 00:22:29.538 00000140 46 7b 24 9e 67 81 1b 6e e5 7f f2 e8 86 26 cd 6d F{$.g..n.....&.m 00:22:29.538 00000150 28 84 db 72 70 fa 4c 24 24 a5 9e 75 15 65 e1 f6 (..rp.L$$..u.e.. 00:22:29.538 00000160 59 c1 f7 8a ce 0b 50 e9 52 ec 9b 28 dd b3 80 1f Y.....P.R..(.... 00:22:29.538 00000170 c7 4f 13 80 c9 0d 2f da d7 68 d4 21 95 9c 18 57 .O..../..h.!...W 00:22:29.538 [2024-09-27 13:27:22.711108] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key2, hash=3, dhgroup=2, seq=3775755287, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.538 [2024-09-27 13:27:22.711497] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.538 [2024-09-27 13:27:22.719124] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.538 [2024-09-27 13:27:22.719428] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.538 [2024-09-27 13:27:22.719527] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.538 [2024-09-27 13:27:22.719751] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.538 [2024-09-27 13:27:22.771182] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.538 [2024-09-27 13:27:22.771399] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.538 [2024-09-27 13:27:22.771620] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 2 (ffdhe3072) 00:22:29.538 [2024-09-27 13:27:22.771712] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.538 [2024-09-27 13:27:22.771941] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] 
auth state: await-challenge 00:22:29.538 ctrlr pubkey: 00:22:29.538 00000000 67 3c 4f 17 d0 95 ac 26 c5 04 4b e5 d5 a0 7a 1c g!J.. 00:22:29.538 00000030 c3 a8 f0 52 bc 3c 9b d2 74 f6 26 7f 4d 4f f2 f8 ...R.<..t.&.MO.. 00:22:29.538 00000040 a6 9a 21 20 b8 43 62 c7 77 82 2a 6d a0 90 04 81 ..! .Cb.w.*m.... 00:22:29.538 00000050 f9 55 ea bc 0a 1e 6d b0 0d 88 35 d9 01 71 8d 66 .U....m...5..q.f 00:22:29.538 00000060 a6 eb 28 9e 4a a8 bd 56 81 86 9a 7a 80 ee 77 85 ..(.J..V...z..w. 00:22:29.538 00000070 c2 b8 1b c0 89 92 fa 0b 98 42 5c 94 a5 1d 94 95 .........B\..... 00:22:29.538 00000080 61 ce 1e b8 1c b7 08 65 b5 8e a3 b4 dd c5 12 16 a......e........ 00:22:29.538 00000090 e1 ff f5 d7 52 78 8e 9a 55 81 8f 4e 7d d3 42 03 ....Rx..U..N}.B. 00:22:29.538 000000a0 0e ca 63 54 ee da b8 99 0e 59 7b 96 5b da 10 aa ..cT.....Y{.[... 00:22:29.538 000000b0 a6 e8 0b 34 2a 75 01 33 43 92 b4 d8 90 4f ef 3e ...4*u.3C....O.> 00:22:29.538 000000c0 6f fb 7c 64 a4 09 a1 81 59 19 ee d8 5d 9a 5e ec o.|d....Y...].^. 00:22:29.538 000000d0 68 5f 50 20 39 f1 6b 30 64 dc 13 1e 10 86 00 ca h_P 9.k0d....... 00:22:29.538 000000e0 8d 1f 36 47 38 9d 5f 58 45 e0 9d 27 93 b8 f0 1d ..6G8._XE..'.... 00:22:29.538 000000f0 e6 fb 61 ce e4 92 4c 0c b7 c0 bc c7 f3 3e 74 c5 ..a...L......>t. 00:22:29.538 00000100 25 cb f6 d5 63 6c 2c 38 cd a4 4a 71 ab 40 68 7b %...cl,8..Jq.@h{ 00:22:29.538 00000110 42 df 8d 62 77 10 12 10 0d c5 c2 74 5e 0f 5d 9c B..bw......t^.]. 00:22:29.538 00000120 81 8f 16 87 db 15 43 ea a6 80 ed 6a da 4f fa 59 ......C....j.O.Y 00:22:29.538 00000130 12 be 07 2b 9e 2d 15 77 7a 91 82 6c 88 87 90 dd ...+.-.wz..l.... 00:22:29.538 00000140 2b d0 44 a4 98 51 a6 cc bd 3e df 78 c9 d1 96 9c +.D..Q...>.x.... 00:22:29.538 00000150 30 7e f2 49 a3 6c 3f d3 d6 de 14 53 c0 d7 d0 96 0~.I.l?....S.... 00:22:29.538 00000160 df f3 c9 76 02 fa 5e 13 69 c8 ef db 7c a4 6e 22 ...v..^.i...|.n" 00:22:29.538 00000170 8d 6a c2 ca d1 1b 51 d2 dd 8e bc d1 25 87 e1 3d .j....Q.....%..= 00:22:29.538 host pubkey: 00:22:29.538 00000000 10 45 d0 f7 09 0e 1f 5d e4 42 e1 3c 63 1f 9c 86 .E.....].B...."...... 00:22:29.538 00000080 9f f5 e6 5e f1 45 05 d6 0b 5c 7f 72 6a 82 31 e3 ...^.E...\.rj.1. 00:22:29.538 00000090 b9 41 80 fb c5 36 9c 89 82 f3 3a d2 13 2d 9f cd .A...6....:..-.. 00:22:29.538 000000a0 e6 e5 56 cb 34 67 da 34 a0 da 8b 0d 56 9b 0b 29 ..V.4g.4....V..) 00:22:29.538 000000b0 b2 b9 c8 dc e7 4c 7e 0c dd ab 02 a1 22 c7 70 7a .....L~.....".pz 00:22:29.538 000000c0 b9 27 2d 3c 52 f5 a6 29 36 da af 5f 3f 74 5c c7 .'-....2.....K.(!. 00:22:29.538 00000040 28 3e 83 13 dd a6 51 cc 65 16 57 52 95 27 a9 49 (>....Q.e.WR.'.I 00:22:29.538 00000050 a6 4e f6 c7 b9 0a 7a 0c 54 e2 4f e1 f8 02 32 eb .N....z.T.O...2. 00:22:29.538 00000060 0f 40 e8 27 b8 ce 1b 4b 4e 5f 7c be c7 3a 09 d4 .@.'...KN_|..:.. 00:22:29.538 00000070 01 34 70 fc 5b d2 05 bf b1 28 5a 1c 7c 81 58 77 .4p.[....(Z.|.Xw 00:22:29.538 00000080 96 53 62 13 02 20 67 56 9d 00 85 92 9f 21 e5 31 .Sb.. gV.....!.1 00:22:29.538 00000090 97 f5 48 72 fb f8 3a 50 c9 e9 da 6a ed cc cb 49 ..Hr..:P...j...I 00:22:29.538 000000a0 35 d8 d8 03 e8 f6 16 da 3f 92 0a dd 68 30 5b 5a 5.......?...h0[Z 00:22:29.538 000000b0 c7 bb 1f 53 c7 f3 f0 fe 3d 07 2f f4 bc 1a 13 f3 ...S....=./..... 00:22:29.538 000000c0 13 6d ed 00 d1 82 56 6b 64 00 d5 5a 93 d0 28 cf .m....Vkd..Z..(. 00:22:29.538 000000d0 47 4c 45 4b 1f 41 0c 20 bf dd 82 b7 59 b8 3e 1d GLEK.A. ....Y.>. 
00:22:29.538 000000e0 6e e9 5e 28 ae 04 ec a4 13 84 87 5e f7 90 4e 3b n.^(.......^..N; 00:22:29.538 000000f0 95 5e 77 35 8b 2a ee 68 c4 a8 cd 83 b6 aa c1 80 .^w5.*.h........ 00:22:29.538 00000100 64 2c 44 81 0d 41 a2 83 87 62 85 b1 ad 3a e9 44 d,D..A...b...:.D 00:22:29.538 00000110 19 86 6d 78 71 99 86 cc 2a 21 7c f5 65 39 27 4a ..mxq...*!|.e9'J 00:22:29.538 00000120 af 6f 00 88 ac e8 c6 ef 03 2d b9 91 de cf 60 90 .o.......-....`. 00:22:29.538 00000130 87 ab 54 ac 6d fa 0c 2d b4 91 c7 81 64 eb 36 b0 ..T.m..-....d.6. 00:22:29.538 00000140 42 64 2e ab 7d bd 27 a1 98 4b a0 a3 98 1a 8d 21 Bd..}.'..K.....! 00:22:29.538 00000150 86 a1 bf 54 27 6b 47 b9 5c 7b 32 ae 81 1e 95 16 ...T'kG.\{2..... 00:22:29.538 00000160 89 d0 b0 9d ad 2a 10 58 44 3f 14 47 b3 8a 7e 3f .....*.XD?.G..~? 00:22:29.538 00000170 db 17 ed 26 58 80 75 7a c1 67 8b 56 22 aa 65 b8 ...&X.uz.g.V".e. 00:22:29.538 [2024-09-27 13:27:22.784866] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key2, hash=3, dhgroup=2, seq=3775755288, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.538 [2024-09-27 13:27:22.785115] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.538 [2024-09-27 13:27:22.793170] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.539 [2024-09-27 13:27:22.793482] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.539 [2024-09-27 13:27:22.793715] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.539 [2024-09-27 13:27:22.793966] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.539 [2024-09-27 13:27:22.899096] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.539 [2024-09-27 13:27:22.899405] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.539 [2024-09-27 13:27:22.899513] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 2 (ffdhe3072) 00:22:29.539 [2024-09-27 13:27:22.899594] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.539 [2024-09-27 13:27:22.899886] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.539 ctrlr pubkey: 00:22:29.539 00000000 d3 11 af 31 7b 74 87 4b 69 41 e5 80 7d 4c 06 7f ...1{t.KiA..}L.. 00:22:29.539 00000010 41 11 72 e9 ae fe 9c 38 88 87 fd b4 6c 8e 1f 96 A.r....8....l... 00:22:29.539 00000020 5b 42 4a b1 fb 4a 69 51 e8 5d a4 48 ae d4 10 fa [BJ..JiQ.].H.... 00:22:29.539 00000030 0c 44 a1 a3 cc 82 91 d2 32 24 fe a6 30 a6 da 8a .D......2$..0... 00:22:29.539 00000040 47 6e 56 8a e9 71 eb e8 e6 be c7 ce cd 4e 28 8a GnV..q.......N(. 
00:22:29.539 00000050 89 c1 1a 58 55 28 39 89 b0 cf 89 08 4b 68 47 7e ...XU(9.....KhG~ 00:22:29.539 00000060 0f b7 48 21 2a b4 2c 95 82 25 a5 22 62 d4 dd 33 ..H!*.,..%."b..3 00:22:29.539 00000070 2d c9 5d ae eb dc 19 56 e6 47 08 d8 19 38 81 a4 -.]....V.G...8.. 00:22:29.539 00000080 29 0d b4 b2 a2 fe 00 90 19 55 dd c8 3d 0a 21 42 )........U..=.!B 00:22:29.539 00000090 2a 32 66 27 b1 eb 0e 76 c6 02 cf cd f2 40 f6 1a *2f'...v.....@.. 00:22:29.539 000000a0 59 d1 24 1e 0d 05 96 2a 01 82 e4 d5 d2 a7 86 0a Y.$....*........ 00:22:29.539 000000b0 4f 4f 84 71 45 cb d2 25 41 fa 05 da 4b 41 45 a8 OO.qE..%A...KAE. 00:22:29.539 000000c0 0b e1 9e 6d 9d 89 d8 ed e9 b1 79 2a d6 9e 22 19 ...m......y*..". 00:22:29.539 000000d0 35 1f 39 0a b9 1f ab 6a 6e 27 2f 95 90 4d 1d c0 5.9....jn'/..M.. 00:22:29.539 000000e0 0c 74 b2 9f e2 18 26 a1 42 7b 76 a0 7e db 2f 31 .t....&.B{v.~./1 00:22:29.539 000000f0 fe 44 c2 68 34 e3 de bc 15 e1 61 8f 11 8d 31 a1 .D.h4.....a...1. 00:22:29.539 00000100 8c 42 c3 70 98 01 aa b5 c9 f7 40 dc 87 4e 63 b4 .B.p......@..Nc. 00:22:29.539 00000110 fc 87 53 c3 27 44 3b 87 ce ac 8d 16 59 71 0f 78 ..S.'D;.....Yq.x 00:22:29.539 00000120 78 58 56 10 d7 10 e2 c3 53 25 69 e2 d0 55 70 99 xXV.....S%i..Up. 00:22:29.539 00000130 75 d9 5d 33 d1 13 b0 3a f6 e0 64 73 a6 87 b2 e6 u.]3...:..ds.... 00:22:29.539 00000140 5a 2e d2 39 48 96 56 c4 10 8d 8e 16 69 3d 11 37 Z..9H.V.....i=.7 00:22:29.539 00000150 9f f3 b2 84 f1 c1 64 a7 31 00 7d f5 b2 ca a2 6d ......d.1.}....m 00:22:29.539 00000160 d0 bf 62 0e 25 96 ae 56 ac a9 14 38 cc f9 49 1d ..b.%..V...8..I. 00:22:29.539 00000170 d6 7c 32 4b 57 3c 74 b9 82 88 05 ac ad 0c 73 4d .|2KW... 00:22:29.539 00000040 1d 5c 91 41 34 73 3a 0c da 77 98 97 80 6e 55 cd .\.A4s:..w...nU. 00:22:29.539 00000050 3b 2b 1f e0 7e 12 e5 1b 8a df 2d f3 4e cf b9 a1 ;+..~.....-.N... 00:22:29.539 00000060 05 04 ae 60 bc e6 9c f1 fb 5e e1 ea 4b ce 0b 5e ...`.....^..K..^ 00:22:29.539 00000070 fc 11 0b 14 d3 9e 37 70 00 97 45 81 31 fb 31 d7 ......7p..E.1.1. 00:22:29.539 00000080 08 4b 22 c8 2c 41 c7 41 55 5c 14 87 7e df 08 7b .K".,A.AU\..~..{ 00:22:29.539 00000090 0d 1b bf fb aa 3c 97 18 42 98 7b e5 7a da de a7 .....<..B.{.z... 00:22:29.539 000000a0 1a 68 c4 33 b2 b4 d6 b1 39 1f bd 8a ca a9 05 b6 .h.3....9....... 00:22:29.539 000000b0 2c 21 ac 53 2f 01 1c 5e a9 c1 87 97 c8 02 43 95 ,!.S/..^......C. 00:22:29.539 000000c0 ff 90 99 29 cd 83 41 00 1f 4f 29 af e4 50 61 40 ...)..A..O)..Pa@ 00:22:29.539 000000d0 3c 4b 18 88 dd b1 91 38 76 15 1b 74 8c fb 93 07 77z.. 00:22:29.539 000000f0 ad b7 8f dc d8 40 e5 e4 44 0f 4f 74 d3 f3 ee ed .....@..D.Ot.... 00:22:29.539 00000100 d3 79 18 4f f7 47 d4 40 57 cf e7 ee c2 c2 a1 de .y.O.G.@W....... 00:22:29.539 00000110 d0 c8 bc a1 d1 2f 79 87 86 94 a5 da a1 69 36 89 ...../y......i6. 00:22:29.539 00000120 82 5d 16 7f 1d 0e 53 14 97 65 0f 5a ef 91 8c 6a .]....S..e.Z...j 00:22:29.539 00000130 0a 40 60 16 83 52 23 f7 b8 d0 07 c7 92 8b b3 d0 .@`..R#......... 00:22:29.539 00000140 93 1c 8f 30 ee b0 a3 2b c2 70 68 95 9e 15 bd e1 ...0...+.ph..... 00:22:29.539 00000150 81 3e 1c a8 c5 31 9a db 19 0a c2 fc 30 50 1c 12 .>...1......0P.. 00:22:29.539 00000160 b1 43 a5 44 99 01 3c 27 ab 6c 7f d6 15 bf 12 67 .C.D..<'.l.....g 00:22:29.539 00000170 0c 4a f4 03 26 4e c4 0d 95 e9 66 89 44 b9 b6 c0 .J..&N....f.D... 
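The rounds in this stretch negotiate dhgroup 2 (ffdhe3072), so the pubkey and dh secret hexdumps run to 384 bytes (offsets up to 00000170) rather than the 256 bytes (offsets 00000000-000000f0) seen in the earlier ffdhe2048 rounds. A quick sanity check of that arithmetic, in plain shell:

    # DH group size in bits -> expected hexdump length in bytes.
    for bits in 2048 3072; do
        printf 'ffdhe%s: %d bytes\n' "$bits" "$((bits / 8))"
    done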
00:22:29.539 [2024-09-27 13:27:22.912324] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key3, hash=3, dhgroup=2, seq=3775755289, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.539 [2024-09-27 13:27:22.912600] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.539 [2024-09-27 13:27:22.920356] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.539 [2024-09-27 13:27:22.920584] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.539 [2024-09-27 13:27:22.920715] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.539 [2024-09-27 13:27:22.920913] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.539 [2024-09-27 13:27:22.973046] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.539 [2024-09-27 13:27:22.973358] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.539 [2024-09-27 13:27:22.973437] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 2 (ffdhe3072) 00:22:29.539 [2024-09-27 13:27:22.973557] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.539 [2024-09-27 13:27:22.973823] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.539 ctrlr pubkey: 00:22:29.539 00000000 d3 11 af 31 7b 74 87 4b 69 41 e5 80 7d 4c 06 7f ...1{t.KiA..}L.. 00:22:29.539 00000010 41 11 72 e9 ae fe 9c 38 88 87 fd b4 6c 8e 1f 96 A.r....8....l... 00:22:29.539 00000020 5b 42 4a b1 fb 4a 69 51 e8 5d a4 48 ae d4 10 fa [BJ..JiQ.].H.... 00:22:29.539 00000030 0c 44 a1 a3 cc 82 91 d2 32 24 fe a6 30 a6 da 8a .D......2$..0... 00:22:29.539 00000040 47 6e 56 8a e9 71 eb e8 e6 be c7 ce cd 4e 28 8a GnV..q.......N(. 00:22:29.539 00000050 89 c1 1a 58 55 28 39 89 b0 cf 89 08 4b 68 47 7e ...XU(9.....KhG~ 00:22:29.539 00000060 0f b7 48 21 2a b4 2c 95 82 25 a5 22 62 d4 dd 33 ..H!*.,..%."b..3 00:22:29.539 00000070 2d c9 5d ae eb dc 19 56 e6 47 08 d8 19 38 81 a4 -.]....V.G...8.. 00:22:29.539 00000080 29 0d b4 b2 a2 fe 00 90 19 55 dd c8 3d 0a 21 42 )........U..=.!B 00:22:29.539 00000090 2a 32 66 27 b1 eb 0e 76 c6 02 cf cd f2 40 f6 1a *2f'...v.....@.. 00:22:29.539 000000a0 59 d1 24 1e 0d 05 96 2a 01 82 e4 d5 d2 a7 86 0a Y.$....*........ 00:22:29.539 000000b0 4f 4f 84 71 45 cb d2 25 41 fa 05 da 4b 41 45 a8 OO.qE..%A...KAE. 00:22:29.539 000000c0 0b e1 9e 6d 9d 89 d8 ed e9 b1 79 2a d6 9e 22 19 ...m......y*..". 00:22:29.539 000000d0 35 1f 39 0a b9 1f ab 6a 6e 27 2f 95 90 4d 1d c0 5.9....jn'/..M.. 00:22:29.539 000000e0 0c 74 b2 9f e2 18 26 a1 42 7b 76 a0 7e db 2f 31 .t....&.B{v.~./1 00:22:29.539 000000f0 fe 44 c2 68 34 e3 de bc 15 e1 61 8f 11 8d 31 a1 .D.h4.....a...1. 00:22:29.539 00000100 8c 42 c3 70 98 01 aa b5 c9 f7 40 dc 87 4e 63 b4 .B.p......@..Nc. 
00:22:29.539 00000110 fc 87 53 c3 27 44 3b 87 ce ac 8d 16 59 71 0f 78 ..S.'D;.....Yq.x 00:22:29.539 00000120 78 58 56 10 d7 10 e2 c3 53 25 69 e2 d0 55 70 99 xXV.....S%i..Up. 00:22:29.539 00000130 75 d9 5d 33 d1 13 b0 3a f6 e0 64 73 a6 87 b2 e6 u.]3...:..ds.... 00:22:29.539 00000140 5a 2e d2 39 48 96 56 c4 10 8d 8e 16 69 3d 11 37 Z..9H.V.....i=.7 00:22:29.539 00000150 9f f3 b2 84 f1 c1 64 a7 31 00 7d f5 b2 ca a2 6d ......d.1.}....m 00:22:29.539 00000160 d0 bf 62 0e 25 96 ae 56 ac a9 14 38 cc f9 49 1d ..b.%..V...8..I. 00:22:29.539 00000170 d6 7c 32 4b 57 3c 74 b9 82 88 05 ac ad 0c 73 4d .|2KW(. 00:22:29.539 00000120 5a cc 98 50 46 1b ff 82 7f d0 40 50 63 d2 28 83 Z..PF.....@Pc.(. 00:22:29.539 00000130 07 6c f9 72 73 9d 56 b8 f2 18 fb 6f a4 2b 2f c0 .l.rs.V....o.+/. 00:22:29.539 00000140 23 8c fa aa 28 3e 44 10 1b e0 4b c2 47 f1 21 06 #...(>D...K.G.!. 00:22:29.539 00000150 16 1f 7d 67 00 45 1f 3f 16 fe 82 f5 da 74 4c 7f ..}g.E.?.....tL. 00:22:29.539 00000160 47 37 57 b8 33 62 f6 64 47 91 cf 77 5c 38 75 f1 G7W.3b.dG..w\8u. 00:22:29.539 00000170 26 ad b1 77 de e9 9f d7 6a d1 52 16 19 b7 ef b2 &..w....j.R..... 00:22:29.539 dh secret: 00:22:29.539 00000000 f8 ca 0f 1d 4a cc fe 71 7d 71 f6 de 28 7e 13 99 ....J..q}q..(~.. 00:22:29.539 00000010 f7 e6 66 4a bb c0 7c 60 53 57 1e 3f 71 83 63 5d ..fJ..|`SW.?q.c] 00:22:29.539 00000020 54 a3 ce b0 21 95 3c e7 8c 86 83 41 db e4 2a 8f T...!.<....A..*. 00:22:29.540 00000030 c0 f8 de 60 fb 77 fc 66 b4 25 38 c6 97 31 7a ff ...`.w.f.%8..1z. 00:22:29.540 00000040 b4 42 74 ed fa 48 df 86 c4 f2 2a 10 12 37 b5 71 .Bt..H....*..7.q 00:22:29.540 00000050 87 a6 38 8b 17 d7 87 bc 0c 74 85 19 6b 00 7b 17 ..8......t..k.{. 00:22:29.540 00000060 bc 2d 18 47 49 1d d9 59 b0 31 fb 09 99 f6 7d 95 .-.GI..Y.1....}. 00:22:29.540 00000070 4f a8 fa 75 85 df 27 ab 36 61 c1 14 8e 7c 96 39 O..u..'.6a...|.9 00:22:29.540 00000080 76 c3 f8 de ba 3b 1f d2 de 47 5b b2 4a 2c 46 5e v....;...G[.J,F^ 00:22:29.540 00000090 52 b3 9c 15 ef 49 bd 3b 32 ad 9b de a0 39 b2 be R....I.;2....9.. 00:22:29.540 000000a0 57 64 9a 0e db f9 f1 f8 cf 36 a9 30 d2 e9 ff 8a Wd.......6.0.... 00:22:29.540 000000b0 18 ad 9b ef 23 cc 95 3c 7b 89 29 af 3e d3 83 e6 ....#..<{.).>... 00:22:29.540 000000c0 65 ff 9f 49 cd 91 cc 57 e0 d2 07 53 11 c8 c5 71 e..I...W...S...q 00:22:29.540 000000d0 dc cc 4f 07 4f 83 a6 56 98 ef f3 70 d4 21 7f 29 ..O.O..V...p.!.) 00:22:29.540 000000e0 04 87 6c 16 fe e4 3f 2e a6 89 bb 2c 36 b3 2d da ..l...?....,6.-. 00:22:29.540 000000f0 9f 53 99 1b 18 3d 24 e5 5f 26 52 e1 09 3e a0 ae .S...=$._&R..>.. 00:22:29.540 00000100 95 0a 2a cb c5 c6 06 a3 b4 bb 2b 4a aa d3 35 20 ..*.......+J..5 00:22:29.540 00000110 77 0f ec ef 35 44 8d 2b b2 9b d0 2e c2 92 40 7a w...5D.+......@z 00:22:29.540 00000120 70 52 40 a2 9a 26 47 99 ed 08 15 bb 21 69 79 1d pR@..&G.....!iy. 
00:22:29.540 00000130 4c 5e b6 46 4d cd f0 64 6e 4f 2e 10 56 71 d7 6a L^.FM..dnO..Vq.j 00:22:29.540 00000140 b8 47 27 13 90 ad da 49 6d 57 60 0c d4 fc 38 6f .G'....ImW`...8o 00:22:29.540 00000150 16 da 47 40 58 30 e2 28 70 df 32 2c ed ef 9d 67 ..G@X0.(p.2,...g 00:22:29.540 00000160 97 59 75 5b 58 06 3b a5 87 41 2d 2b 85 2a 6e 6e .Yu[X.;..A-+.*nn 00:22:29.540 00000170 76 f8 7c 82 de d5 18 a5 b6 00 a6 76 d1 a1 c3 5c v.|........v...\ 00:22:29.540 [2024-09-27 13:27:22.987532] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key3, hash=3, dhgroup=2, seq=3775755290, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.540 [2024-09-27 13:27:22.987987] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.540 [2024-09-27 13:27:22.996051] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.540 [2024-09-27 13:27:22.999852] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.540 [2024-09-27 13:27:23.000116] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.540 [2024-09-27 13:27:23.000532] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.540 [2024-09-27 13:27:23.102757] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.540 [2024-09-27 13:27:23.103100] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.540 [2024-09-27 13:27:23.103243] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 2 (ffdhe3072) 00:22:29.540 [2024-09-27 13:27:23.103448] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.540 [2024-09-27 13:27:23.103743] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.540 ctrlr pubkey: 00:22:29.540 00000000 ca b0 42 c7 86 7b 14 f5 fd ae 90 a9 13 e7 ef 33 ..B..{.........3 00:22:29.540 00000010 fb d5 8e 53 9c 3b 53 2b 12 5d ec 5d 21 33 d2 7d ...S.;S+.].]!3.} 00:22:29.540 00000020 40 75 e2 8a ee 80 14 ed 72 4f c5 7a c4 7c a0 6a @u......rO.z.|.j 00:22:29.540 00000030 7c a9 98 37 7b 3a 4b dc b0 47 6c fb 74 df a9 63 |..7{:K..Gl.t..c 00:22:29.540 00000040 82 61 65 a1 82 46 d4 dd 44 87 7f 2b 36 96 0f a5 .ae..F..D..+6... 00:22:29.540 00000050 b2 34 39 d1 95 10 0f c7 7b 39 a5 08 68 e9 af 8c .49.....{9..h... 00:22:29.540 00000060 c8 e1 97 34 ff 11 bd 44 9e 62 05 99 28 b0 aa f9 ...4...D.b..(... 00:22:29.540 00000070 92 05 39 fd 51 cc da b9 6c c7 e1 2c db 4f d9 ce ..9.Q...l..,.O.. 00:22:29.540 00000080 09 e6 3a 44 cc 38 1d 28 47 2a d7 f6 7d c0 2d dc ..:D.8.(G*..}.-. 
00:22:29.540 00000090 2a f6 a3 2f 8f 41 ed eb ce ae 10 08 4c 90 08 60 *../.A......L..` 00:22:29.540 000000a0 2d 9e f7 9c 01 84 7c df 3d 77 1a 0b 60 67 a4 5e -.....|.=w..`g.^ 00:22:29.540 000000b0 ad d4 c6 94 27 5d 10 ca 7f eb 8a d5 c1 e4 06 73 ....'].........s 00:22:29.540 000000c0 81 a4 93 73 04 e3 65 42 fa 2f 04 1c 28 fd 1a 62 ...s..eB./..(..b 00:22:29.540 000000d0 21 8d 6d cd db 03 8c ad 03 08 8e 6a de 23 fc aa !.m........j.#.. 00:22:29.540 000000e0 5b 99 56 72 d5 c1 50 bc b5 dd d0 a0 d0 5a 98 e5 [.Vr..P......Z.. 00:22:29.540 000000f0 3e 31 05 21 1c d7 68 0a 93 75 23 46 e4 28 08 76 >1.!..h..u#F.(.v 00:22:29.540 00000100 7c 18 42 7d e1 ee 2d 22 8c 7e 9e a1 40 de d8 be |.B}..-".~..@... 00:22:29.540 00000110 19 78 0f f7 fd a9 87 25 e9 92 70 31 60 76 1a 4c .x.....%..p1`v.L 00:22:29.540 00000120 2a 08 d3 01 e1 7c 2c a5 78 cf a6 e0 ce d3 00 19 *....|,.x....... 00:22:29.540 00000130 16 9a e1 d9 18 6c ae a1 00 14 0f 5d d1 a5 11 c1 .....l.....].... 00:22:29.540 00000140 56 54 8e 5f 17 bc 20 87 ec c3 cd f9 ad d5 a5 4e VT._.. ........N 00:22:29.540 00000150 4f 2e f8 e7 eb c9 c4 21 46 a0 21 eb 39 a4 52 3b O......!F.!.9.R; 00:22:29.540 00000160 95 3e 3d 30 5c d6 ee e2 d3 bd a9 7f d3 9e 37 f1 .>=0\.........7. 00:22:29.540 00000170 14 5a 85 ec ac b8 a5 9f 7e ae 01 4c db 6d c5 09 .Z......~..L.m.. 00:22:29.540 host pubkey: 00:22:29.540 00000000 51 17 2a bc fa 47 b2 46 e2 b5 25 59 04 78 c2 0f Q.*..G.F..%Y.x.. 00:22:29.540 00000010 5a b4 64 ce 00 13 41 ad 1c 9c e2 9e d0 a2 9c d6 Z.d...A......... 00:22:29.540 00000020 8a e3 60 1d 6d ee 49 7d 05 15 bc 60 2c 11 12 4c ..`.m.I}...`,..L 00:22:29.540 00000030 17 65 3f 6d ba 8e cb 8e 3b 7f c0 04 24 c7 b2 bb .e?m....;...$... 00:22:29.540 00000040 44 1b 9b 71 9b b5 eb 97 09 db 9e 5b 84 62 dc 2d D..q.......[.b.- 00:22:29.540 00000050 80 90 71 95 55 8c 3d 07 69 7b 19 58 1f fe 9a a0 ..q.U.=.i{.X.... 00:22:29.540 00000060 b3 e2 3d 56 ee e0 be e3 55 1a a9 ad b4 c6 d0 dd ..=V....U....... 00:22:29.540 00000070 10 97 d0 2a bd 8f 04 fe 66 6e 2a 1b 51 7c 2c 5a ...*....fn*.Q|,Z 00:22:29.540 00000080 55 38 93 3b 05 a9 a3 f7 09 1e f0 03 03 f1 cb 5d U8.;...........] 00:22:29.540 00000090 45 76 fb 78 9c e1 c1 8a fe a3 b9 94 1a fa 28 30 Ev.x..........(0 00:22:29.540 000000a0 92 41 8c f1 b1 0d 50 4a 8f 10 82 b9 ef e8 30 de .A....PJ......0. 00:22:29.540 000000b0 bd 60 1a 07 a0 d5 1c e3 c1 0d 07 be ae d4 f6 8d .`.............. 00:22:29.540 000000c0 ec a3 f4 c2 19 34 eb d4 33 75 be 33 80 b7 64 dd .....4..3u.3..d. 00:22:29.540 000000d0 69 b1 58 02 c2 c7 a5 83 a2 3d 36 2f 22 88 79 59 i.X......=6/".yY 00:22:29.540 000000e0 68 49 c4 e5 67 13 93 b9 f3 31 cd b2 75 50 91 66 hI..g....1..uP.f 00:22:29.540 000000f0 f3 73 d6 b4 bc 0b fe 80 b4 62 f0 a6 a8 1f b8 ad .s.......b...... 00:22:29.540 00000100 0e 55 c9 e1 16 d2 09 b3 47 00 00 d9 f0 b7 5f a7 .U......G....._. 00:22:29.540 00000110 28 d8 6a dd e3 4c f7 af 78 36 e3 31 62 9d e0 61 (.j..L..x6.1b..a 00:22:29.540 00000120 b5 54 89 a2 ec f1 7c d3 2d 05 02 f8 a1 39 22 7b .T....|.-....9"{ 00:22:29.540 00000130 fd 69 65 52 1f 70 7e 13 ef be b6 d1 4a e3 21 86 .ieR.p~.....J.!. 00:22:29.540 00000140 0a f7 70 93 47 6a 3a d5 fb c6 a6 54 ee 35 b9 5b ..p.Gj:....T.5.[ 00:22:29.540 00000150 78 5d 5c 72 ab b3 ac e0 66 2f ed ad 7c 51 b9 a4 x]\r....f/..|Q.. 
00:22:29.540 00000160 fb 47 58 cc 0b e7 91 e9 18 4e ca 98 7b c7 19 30 .GX......N..{..0 00:22:29.540 00000170 d2 ef 9d 9a 78 de b9 f0 4f 14 ea ac b7 22 65 50 ....x...O...."eP 00:22:29.540 dh secret: 00:22:29.540 00000000 56 7f 91 f4 65 f5 ba 32 6e 94 ed 82 4a 4a cd 7c V...e..2n...JJ.| 00:22:29.540 00000010 f9 2d 38 72 ba 38 c1 9d 34 d3 ae 93 11 4e 54 ab .-8r.8..4....NT. 00:22:29.540 00000020 26 b6 e0 84 c4 be 09 77 83 c0 ba 0d c5 39 14 6f &......w.....9.o 00:22:29.540 00000030 be a6 15 d3 ea 6b 74 04 03 6d 0f ee 1d 16 ad 19 .....kt..m...... 00:22:29.540 00000040 14 d8 5f 50 dd 2e d3 eb 21 4c 05 e7 b6 9f 7b 0c .._P....!L....{. 00:22:29.540 00000050 6b 7b 31 f3 53 23 1e 9c d1 4f de 2b 82 8d 39 0a k{1.S#...O.+..9. 00:22:29.540 00000060 cc 46 d6 7f 76 18 52 06 39 7a 3d be 40 df 7e d3 .F..v.R.9z=.@.~. 00:22:29.540 00000070 99 aa 3f da d2 c3 f7 7c 48 2b 2f 82 99 c8 1b 44 ..?....|H+/....D 00:22:29.540 00000080 30 ee 5b c8 d6 97 5f fb a1 03 23 72 7a c1 42 35 0.[..._...#rz.B5 00:22:29.540 00000090 63 82 da 6d 4e f2 c2 b2 b4 53 0f f0 5d 39 00 fe c..mN....S..]9.. 00:22:29.540 000000a0 cf 1e e2 17 bb e9 4c ad b8 81 05 a2 16 7d 9d 01 ......L......}.. 00:22:29.540 000000b0 a1 67 21 d6 30 8c be 4d 05 23 6b 80 37 b1 84 48 .g!.0..M.#k.7..H 00:22:29.540 000000c0 5f 57 df ef 43 99 c2 2b e8 f0 a1 74 1d 1d e2 b2 _W..C..+...t.... 00:22:29.540 000000d0 62 d4 4f 32 72 52 c5 4b 64 53 0e ae dd e7 56 2a b.O2rR.KdS....V* 00:22:29.540 000000e0 11 a0 1f 8d 10 bc 4e fb f9 02 39 b5 a8 78 b8 63 ......N...9..x.c 00:22:29.540 000000f0 b8 8b 5f aa 2e e1 8f 11 b3 22 07 e4 fe c9 4b f5 .._......"....K. 00:22:29.540 00000100 46 f6 d7 b6 7d a0 3f ea fd 0d 52 ed f5 a9 07 28 F...}.?...R....( 00:22:29.540 00000110 e2 7f 0a 13 ad 69 9a d6 0f 2b 86 e3 89 78 79 eb .....i...+...xy. 00:22:29.540 00000120 81 1a a6 4a 28 4e 32 ed 15 ef a5 45 42 59 44 7b ...J(N2....EBYD{ 00:22:29.540 00000130 93 04 f2 78 29 20 99 3b ca 08 d7 79 d3 38 22 c1 ...x) .;...y.8". 00:22:29.540 00000140 9d 27 0d 73 dc 18 59 2b 2e 9c 72 63 0e c0 f1 09 .'.s..Y+..rc.... 00:22:29.540 00000150 a7 fe ae fc 61 af 61 08 5f 56 16 15 ce 4a 80 43 ....a.a._V...J.C 00:22:29.540 00000160 a8 de bd 86 e6 67 01 32 47 f6 c5 5f b6 ad d7 0c .....g.2G.._.... 
00:22:29.540 00000170 7e bf 6b 2e 7b e9 3a 1c 5b 6c 25 ae 82 58 9a 7d ~.k.{.:.[l%..X.} 00:22:29.540 [2024-09-27 13:27:23.118398] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key4, hash=3, dhgroup=2, seq=3775755291, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.540 [2024-09-27 13:27:23.118916] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.540 [2024-09-27 13:27:23.126491] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.540 [2024-09-27 13:27:23.126889] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.540 [2024-09-27 13:27:23.126995] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.540 [2024-09-27 13:27:23.179026] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.540 [2024-09-27 13:27:23.179404] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.540 [2024-09-27 13:27:23.179574] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 2 (ffdhe3072) 00:22:29.540 [2024-09-27 13:27:23.180474] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.540 [2024-09-27 13:27:23.180691] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.540 ctrlr pubkey: 00:22:29.540 00000000 ca b0 42 c7 86 7b 14 f5 fd ae 90 a9 13 e7 ef 33 ..B..{.........3 00:22:29.540 00000010 fb d5 8e 53 9c 3b 53 2b 12 5d ec 5d 21 33 d2 7d ...S.;S+.].]!3.} 00:22:29.540 00000020 40 75 e2 8a ee 80 14 ed 72 4f c5 7a c4 7c a0 6a @u......rO.z.|.j 00:22:29.540 00000030 7c a9 98 37 7b 3a 4b dc b0 47 6c fb 74 df a9 63 |..7{:K..Gl.t..c 00:22:29.540 00000040 82 61 65 a1 82 46 d4 dd 44 87 7f 2b 36 96 0f a5 .ae..F..D..+6... 00:22:29.540 00000050 b2 34 39 d1 95 10 0f c7 7b 39 a5 08 68 e9 af 8c .49.....{9..h... 00:22:29.540 00000060 c8 e1 97 34 ff 11 bd 44 9e 62 05 99 28 b0 aa f9 ...4...D.b..(... 00:22:29.540 00000070 92 05 39 fd 51 cc da b9 6c c7 e1 2c db 4f d9 ce ..9.Q...l..,.O.. 00:22:29.540 00000080 09 e6 3a 44 cc 38 1d 28 47 2a d7 f6 7d c0 2d dc ..:D.8.(G*..}.-. 00:22:29.540 00000090 2a f6 a3 2f 8f 41 ed eb ce ae 10 08 4c 90 08 60 *../.A......L..` 00:22:29.540 000000a0 2d 9e f7 9c 01 84 7c df 3d 77 1a 0b 60 67 a4 5e -.....|.=w..`g.^ 00:22:29.540 000000b0 ad d4 c6 94 27 5d 10 ca 7f eb 8a d5 c1 e4 06 73 ....'].........s 00:22:29.540 000000c0 81 a4 93 73 04 e3 65 42 fa 2f 04 1c 28 fd 1a 62 ...s..eB./..(..b 00:22:29.540 000000d0 21 8d 6d cd db 03 8c ad 03 08 8e 6a de 23 fc aa !.m........j.#.. 00:22:29.540 000000e0 5b 99 56 72 d5 c1 50 bc b5 dd d0 a0 d0 5a 98 e5 [.Vr..P......Z.. 00:22:29.540 000000f0 3e 31 05 21 1c d7 68 0a 93 75 23 46 e4 28 08 76 >1.!..h..u#F.(.v 00:22:29.540 00000100 7c 18 42 7d e1 ee 2d 22 8c 7e 9e a1 40 de d8 be |.B}..-".~..@... 
00:22:29.540 00000110 19 78 0f f7 fd a9 87 25 e9 92 70 31 60 76 1a 4c .x.....%..p1`v.L 00:22:29.540 00000120 2a 08 d3 01 e1 7c 2c a5 78 cf a6 e0 ce d3 00 19 *....|,.x....... 00:22:29.540 00000130 16 9a e1 d9 18 6c ae a1 00 14 0f 5d d1 a5 11 c1 .....l.....].... 00:22:29.540 00000140 56 54 8e 5f 17 bc 20 87 ec c3 cd f9 ad d5 a5 4e VT._.. ........N 00:22:29.540 00000150 4f 2e f8 e7 eb c9 c4 21 46 a0 21 eb 39 a4 52 3b O......!F.!.9.R; 00:22:29.540 00000160 95 3e 3d 30 5c d6 ee e2 d3 bd a9 7f d3 9e 37 f1 .>=0\.........7. 00:22:29.540 00000170 14 5a 85 ec ac b8 a5 9f 7e ae 01 4c db 6d c5 09 .Z......~..L.m.. 00:22:29.540 host pubkey: 00:22:29.540 00000000 62 e6 7d 83 6a 2c ea 53 51 19 e7 47 d5 0b 8c ce b.}.j,.SQ..G.... 00:22:29.540 00000010 72 9b fc 06 57 4a 7d 81 27 60 bb ed 53 d7 6f fe r...WJ}.'`..S.o. 00:22:29.540 00000020 43 0b 31 ae 8b 51 6a 92 c5 86 8f 23 64 61 a7 92 C.1..Qj....#da.. 00:22:29.540 00000030 42 7b 91 21 55 90 6a 10 e6 0a 58 ad 39 23 58 45 B{.!U.j...X.9#XE 00:22:29.540 00000040 2a 26 cc 13 fe 4b f5 a8 7b 1e 14 20 f2 39 27 13 *&...K..{.. .9'. 00:22:29.540 00000050 37 bb 7b 24 66 13 89 ff 07 8a 19 04 77 7a 42 3e 7.{$f.......wzB> 00:22:29.540 00000060 4b 27 b4 d4 02 16 ae c7 4b 7f 23 b3 6e e3 a4 38 K'......K.#.n..8 00:22:29.541 00000070 91 03 aa 1f 9e c3 e8 d4 52 af ff 46 b7 97 66 2e ........R..F..f. 00:22:29.541 00000080 74 1b 00 25 19 a8 b6 17 f2 ce 10 58 49 1a d7 93 t..%.......XI... 00:22:29.541 00000090 f2 20 1d c2 24 c1 dc 5c d1 1e 42 a3 29 a9 de 85 . ..$..\..B.)... 00:22:29.541 000000a0 3a da 1d 6c ce be a3 a5 98 5e 48 fa 40 72 0c 3f :..l.....^H.@r.? 00:22:29.541 000000b0 5d a3 ad 43 5b be a6 0b 2d 1e 77 ed d5 3a f2 ab ]..C[...-.w..:.. 00:22:29.541 000000c0 5c f9 d9 a3 dc 65 d3 37 88 69 21 70 ac 37 52 b2 \....e.7.i!p.7R. 00:22:29.541 000000d0 87 98 6d 34 9a 6c 3f f2 9e 04 17 47 10 ab cb 08 ..m4.l?....G.... 00:22:29.541 000000e0 75 ca 22 b2 58 ba 55 47 43 e1 4b 69 d4 c1 1a 30 u.".X.UGC.Ki...0 00:22:29.541 000000f0 db b2 98 b0 a5 37 a7 07 a0 e6 3f 3b a1 33 cb 3d .....7....?;.3.= 00:22:29.541 00000100 c0 c6 69 44 f5 8c 1a 2f 7c 22 1d 3d c7 5b de 16 ..iD.../|".=.[.. 00:22:29.541 00000110 8e 70 3a 69 2a 00 72 83 2d 91 9d 2c 0a a2 2b a1 .p:i*.r.-..,..+. 00:22:29.541 00000120 e6 6f bf 35 55 9f 60 f5 42 e8 bd 0c 20 e9 db fa .o.5U.`.B... ... 00:22:29.541 00000130 d4 25 5a d1 d1 c2 ac 20 7e 3b e0 84 b3 7a 58 03 .%Z.... ~;...zX. 00:22:29.541 00000140 34 8b 18 8e 6f 6b 6a d2 9b eb 63 f3 d9 a2 c3 cd 4...okj...c..... 00:22:29.541 00000150 5f 9a 32 ac 42 13 c6 c5 43 41 25 d1 16 92 9f 35 _.2.B...CA%....5 00:22:29.541 00000160 f6 49 f6 55 14 84 f9 4c 7d 1e dd 65 be ac 20 fd .I.U...L}..e.. . 00:22:29.541 00000170 dc 8b f6 be 26 5d 38 a0 16 6d 7b 47 08 36 9f e9 ....&]8..m{G.6.. 00:22:29.541 dh secret: 00:22:29.541 00000000 ed e5 57 21 a0 7d 72 5e 2c 3b 7c bb 44 40 39 dc ..W!.}r^,;|.D@9. 00:22:29.541 00000010 ba 99 ca 05 49 d8 d7 9f ec c2 07 6a 4b 06 5b 72 ....I......jK.[r 00:22:29.541 00000020 dd ea 4a f7 f0 0a 22 dd 64 d3 b0 b8 59 25 98 6c ..J...".d...Y%.l 00:22:29.541 00000030 2e b0 f8 02 a6 15 4e 1f 60 dd 2a aa ee 2e f7 ae ......N.`.*..... 00:22:29.541 00000040 84 0a c8 55 9e 17 17 b1 a5 36 46 0e f6 34 76 51 ...U.....6F..4vQ 00:22:29.541 00000050 47 89 02 6c 62 62 e1 a2 9c 7d ea 2e 56 15 1b b3 G..lbb...}..V... 
00:22:29.541 00000060 d2 49 73 0b 58 fa 1a e6 04 b0 69 c0 fb 24 de 64 .Is.X.....i..$.d 00:22:29.541 00000070 09 b0 1c eb 34 8c f5 8e d0 b2 72 d7 64 cf c6 24 ....4.....r.d..$ 00:22:29.541 00000080 74 47 12 7e 0d 04 fc 56 4d 9e 1a bd b4 d1 12 72 tG.~...VM......r 00:22:29.541 00000090 e6 c9 cf 10 9d 01 24 de 1d c9 ce a3 ff 99 f1 45 ......$........E 00:22:29.541 000000a0 ed 95 a4 fd 01 8c 9b 84 ff 11 18 8b b8 28 30 f0 .............(0. 00:22:29.541 000000b0 38 2e e0 47 f9 95 15 71 7b e7 0f eb 84 35 32 bf 8..G...q{....52. 00:22:29.541 000000c0 e0 6b 57 59 0f 0e 93 bf 4a 4e 6c 9f 0c 07 2f a5 .kWY....JNl.../. 00:22:29.541 000000d0 e3 cd bf dd 40 37 88 17 ed a1 54 c8 86 64 da 82 ....@7....T..d.. 00:22:29.541 000000e0 72 34 fc 1a 61 a7 a3 0e 80 2f cf 34 7e 58 a4 85 r4..a..../.4~X.. 00:22:29.541 000000f0 2b 2a f0 e2 d6 3c b8 34 2f f1 1b d8 9f 14 e6 fa +*...<.4/....... 00:22:29.541 00000100 5b ab 1b 42 3c fb af 25 1e b3 6d 6a 13 b0 ff 1d [..B<..%..mj.... 00:22:29.541 00000110 5c 30 b0 ec 08 4a 60 da 91 30 d3 80 3f e0 b2 2e \0...J`..0..?... 00:22:29.541 00000120 d3 c6 bc b7 d6 17 fb 55 6a fa 48 db bc 56 81 74 .......Uj.H..V.t 00:22:29.541 00000130 3c 68 d3 4b e9 fc 46 72 f7 7f b7 92 96 5d 81 1a "V.a. 00:22:29.541 00000050 c0 36 6a 43 f1 67 d4 40 64 d4 22 19 14 38 fc 93 .6jC.g.@d."..8.. 00:22:29.541 00000060 e0 77 1d 69 2b 07 cc a5 a2 9c 89 88 81 d8 51 a2 .w.i+.........Q. 00:22:29.541 00000070 cd 94 37 65 92 c7 93 9d c4 73 50 18 0f b2 5a 83 ..7e.....sP...Z. 00:22:29.541 00000080 c4 b5 19 2c ae 27 42 f3 99 7e 59 0a 68 b2 1e 47 ...,.'B..~Y.h..G 00:22:29.541 00000090 c1 ec 23 12 c1 46 d9 9d cc d5 97 fe 8f 6e c2 45 ..#..F.......n.E 00:22:29.541 000000a0 5f 87 6b 00 51 06 2a b7 d0 e5 48 c3 0f 05 ad 42 _.k.Q.*...H....B 00:22:29.541 000000b0 18 20 0a fe da 36 cc ee 11 6a a4 bc cc aa eb 44 . ...6...j.....D 00:22:29.541 000000c0 cc 5a c9 ef 30 9d d3 ab 3d 42 bc 05 52 0b 08 e5 .Z..0...=B..R... 00:22:29.541 000000d0 27 d4 c1 67 7a d2 54 07 b7 aa 76 a6 9a f5 47 11 '..gz.T...v...G. 00:22:29.541 000000e0 47 a9 bd fa 22 f2 00 fc 10 d0 31 ff b9 38 05 cf G...".....1..8.. 00:22:29.541 000000f0 2e 48 0e 4e e9 16 20 fa 9a 28 d9 94 86 3c 6a 47 .H.N.. ..(....e.G?t. 00:22:29.541 00000120 be 36 a0 4d c8 49 3d f2 86 e4 69 87 87 6f 29 fb .6.M.I=...i..o). 00:22:29.541 00000130 ba 28 4f 3e 0d 8c 02 1d af f4 fa b4 4e e7 c8 4e .(O>........N..N 00:22:29.541 00000140 9b a8 a4 ef 34 41 67 ff 1c 61 85 3f 4a f3 f0 c3 ....4Ag..a.?J... 00:22:29.541 00000150 1c f9 b0 4c 49 78 c6 d1 e4 00 4c d3 94 05 29 5b ...LIx....L...)[ 00:22:29.541 00000160 65 10 26 0b c1 ed 2c 3e 70 89 a8 cd 8f 73 af 70 e.&...,>p....s.p 00:22:29.541 00000170 0e b0 ae 3f a5 4b 03 fa e5 f9 6a 64 aa 04 88 dd ...?.K....jd.... 00:22:29.541 00000180 44 2e 1b a0 22 a3 f2 92 1e 1b 8d 9f 18 d3 6d dc D...".........m. 00:22:29.541 00000190 5e 4b 9e 4e bf 15 25 f7 84 4c 21 c4 43 58 8c 26 ^K.N..%..L!.CX.& 00:22:29.541 000001a0 b2 8f 28 c1 46 a5 16 b7 00 31 79 d2 7f 4d 42 6c ..(.F....1y..MBl 00:22:29.541 000001b0 3c 23 61 32 c1 08 fb 33 2f db 62 6d f9 28 5c 13 <#a2...3/.bm.(\. 00:22:29.541 000001c0 45 c1 4d 52 37 8d 82 d5 a4 31 65 16 38 f1 93 90 E.MR7....1e.8... 00:22:29.541 000001d0 3f 14 aa a5 12 db e2 6b ed 66 16 7d ca c0 52 16 ?......k.f.}..R. 
00:22:29.541 000001e0 59 c8 c9 f5 b1 ef 1d 65 88 da e7 35 a5 53 61 4d Y......e...5.SaM 00:22:29.541 000001f0 fc 62 45 55 95 54 fa fe 52 90 07 6c ea 64 2b 62 .bEU.T..R..l.d+b 00:22:29.541 dh secret: 00:22:29.541 00000000 db 0e 47 ba 6e 33 63 d4 e0 b8 0d 5b 4f b5 bc 3d ..G.n3c....[O..= 00:22:29.541 00000010 21 9a 0e ed 6a 6f 84 cd ce 77 a4 95 a0 87 e6 20 !...jo...w..... 00:22:29.541 00000020 54 c0 54 f8 d8 dc 02 08 3d a5 0e 56 0f fe 82 16 T.T.....=..V.... 00:22:29.541 00000030 9d a3 e2 9d 49 dd 5b 7f 42 e3 10 08 6b 1a 6a cb ....I.[.B...k.j. 00:22:29.541 00000040 ff bd 43 0d 0f 45 d2 89 1b 5f f1 e1 3b d3 d4 71 ..C..E..._..;..q 00:22:29.541 00000050 12 5f 28 0a b9 f8 2e 35 e5 51 16 04 8b 34 d0 96 ._(....5.Q...4.. 00:22:29.541 00000060 a4 03 55 7c 3d a7 ad 77 fe 39 93 46 b3 ab 83 02 ..U|=..w.9.F.... 00:22:29.541 00000070 1e d4 75 6c 1e cb 59 5b 52 8b a5 07 fb f4 b4 73 ..ul..Y[R......s 00:22:29.541 00000080 33 d9 b2 08 08 a2 d2 b9 ab 70 72 93 7e 31 37 b5 3........pr.~17. 00:22:29.541 00000090 6a 50 f2 29 d9 df fe ee 66 02 d3 c3 f6 11 21 5b jP.)....f.....![ 00:22:29.541 000000a0 b5 66 d7 a1 b5 70 70 d6 2a 71 24 7c c9 89 41 b4 .f...pp.*q$|..A. 00:22:29.541 000000b0 29 c8 d5 f1 70 9b 15 83 08 b0 72 77 a3 9b 4e 44 )...p.....rw..ND 00:22:29.541 000000c0 89 e8 7c 3b 87 df cf 80 c2 76 31 42 7a a2 2c 39 ..|;.....v1Bz.,9 00:22:29.541 000000d0 2b 8e 65 3b e4 9f b1 8f e6 92 97 c6 ab 47 af 93 +.e;.........G.. 00:22:29.541 000000e0 31 cb 38 a7 26 d3 f8 b4 09 03 0b cf 2a 21 47 dc 1.8.&.......*!G. 00:22:29.542 000000f0 15 f5 9a f8 7a bb 4d be 28 32 3c 8b ef 88 12 c7 ....z.M.(2<..... 00:22:29.542 00000100 ba 91 b4 31 18 94 ba e8 20 5d 48 2b 95 34 1b 1a ...1.... ]H+.4.. 00:22:29.542 00000110 01 2d f5 d7 88 35 b0 7d fd ee af 4e 26 c2 01 20 .-...5.}...N&.. 00:22:29.542 00000120 14 5a 3f bc c5 9f 62 e2 d2 5d 2c f4 94 39 dd cc .Z?...b..],..9.. 00:22:29.542 00000130 8e 81 8b 05 79 6e 0a db cf 51 59 4d f5 36 76 92 ....yn...QYM.6v. 00:22:29.542 00000140 81 63 bb f4 74 5d 9a df 5e d2 a3 dc 5e 93 ea d7 .c..t]..^...^... 00:22:29.542 00000150 15 82 c6 4f 92 59 b7 7f 4d 79 3b 78 7b 18 30 1d ...O.Y..My;x{.0. 00:22:29.542 00000160 a3 f7 7d 65 b5 db 55 57 da ca 9f 5b cd e1 85 49 ..}e..UW...[...I 00:22:29.542 00000170 14 ed 06 31 4d c2 a8 20 a4 52 a5 1e c5 eb 8c ff ...1M.. .R...... 00:22:29.542 00000180 4d f5 43 7e c8 a4 d4 a4 c1 32 ab a3 b5 44 41 7e M.C~.....2...DA~ 00:22:29.542 00000190 01 aa df 4b 1b 92 28 b2 22 29 96 d6 17 06 60 a0 ...K..(.")....`. 00:22:29.542 000001a0 42 98 22 c7 fa 41 50 b0 46 89 69 91 28 9e 83 8d B."..AP.F.i.(... 00:22:29.542 000001b0 b0 d1 82 ec db bf e2 db bb 8c 04 30 f6 ab 78 13 ...........0..x. 00:22:29.542 000001c0 8d ea de 7f b6 00 bd f2 27 30 88 64 cf 0b e5 07 ........'0.d.... 00:22:29.542 000001d0 e3 d7 52 64 a9 0e 16 de 64 c2 53 91 59 46 48 83 ..Rd....d.S.YFH. 00:22:29.542 000001e0 d4 3a 9c 7e 53 13 04 de 42 4f 3d 02 e0 2d 82 af .:.~S...BO=..-.. 00:22:29.542 000001f0 59 4a fd c9 e1 70 03 bc ad c8 87 6a 4e 8e 8d b8 YJ...p.....jN... 
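Every successful handshake in this log walks the same host-side state sequence reported by nvme_auth_set_state: negotiate, await-negotiate, await-challenge, await-reply, await-success1, (usually) await-success2, done. The sketch below is only an illustrative summary of that sequence and of the numeric identifiers printed by nvme_auth_send_negotiate / nvme_auth_send_reply; it is not the actual SPDK state machine.

# Illustrative sketch (not nvme_auth.c): the state sequence and identifier
# mappings visible in the debug lines above.
from enum import Enum, auto

class AuthState(Enum):
    NEGOTIATE = auto()
    AWAIT_NEGOTIATE = auto()
    AWAIT_CHALLENGE = auto()
    AWAIT_REPLY = auto()
    AWAIT_SUCCESS1 = auto()
    AWAIT_SUCCESS2 = auto()
    DONE = auto()

# Names copied from the debug output itself:
# "digest: 3 (sha512)", "dhgroup: 2 (ffdhe3072)", "dhgroup: 3 (ffdhe4096)".
DIGESTS = {3: "sha512"}
DHGROUPS = {2: "ffdhe3072", 3: "ffdhe4096"}

def successful_handshake(digest_id, dhgroup_id):
    """State sequence a qpair walks through on a successful authentication,
    as logged for each key0..key4 run in this test."""
    print(f"negotiate: digest={DIGESTS[digest_id]} dhgroup={DHGROUPS[dhgroup_id]}")
    return [
        AuthState.NEGOTIATE,        # host sends its negotiate message
        AuthState.AWAIT_NEGOTIATE,  # wait for the controller's hash/dhgroup choice
        AuthState.AWAIT_CHALLENGE,  # challenge arrives; "ctrlr pubkey" is dumped here
        AuthState.AWAIT_REPLY,      # host replies (host pubkey + response), see nvme_auth_send_reply
        AuthState.AWAIT_SUCCESS1,   # wait for the controller's success1
        AuthState.AWAIT_SUCCESS2,   # some runs in this log skip this and go straight to done
        AuthState.DONE,
    ]

for state in successful_handshake(3, 2):
    print(state.name.lower().replace("_", "-"))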
00:22:29.542 [2024-09-27 13:27:23.343819] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key0, hash=3, dhgroup=3, seq=3775755293, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.542 [2024-09-27 13:27:23.344140] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.542 [2024-09-27 13:27:23.367812] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.542 [2024-09-27 13:27:23.368169] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.542 [2024-09-27 13:27:23.368475] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.542 [2024-09-27 13:27:23.368633] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.542 [2024-09-27 13:27:23.420893] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.542 [2024-09-27 13:27:23.421099] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.542 [2024-09-27 13:27:23.421402] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 3 (ffdhe4096) 00:22:29.542 [2024-09-27 13:27:23.421603] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.542 [2024-09-27 13:27:23.421890] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.542 ctrlr pubkey: 00:22:29.542 00000000 c7 80 44 c2 73 c9 b0 cf 78 e6 29 8e c6 51 3a 04 ..D.s...x.)..Q:. 00:22:29.542 00000010 cd a2 be 43 2b 7b 13 48 92 d5 6e 4b 4b e3 fa de ...C+{.H..nKK... 00:22:29.542 00000020 0f cb a6 06 ad 72 24 0a 35 60 b9 d3 64 35 39 b2 .....r$.5`..d59. 00:22:29.542 00000030 7f 36 0b 6c 22 ba 3a 25 19 82 49 a0 33 80 bd 64 .6.l".:%..I.3..d 00:22:29.542 00000040 f8 63 21 e2 16 b6 77 d6 24 ad 3e 22 56 b9 61 05 .c!...w.$.>"V.a. 00:22:29.542 00000050 c0 36 6a 43 f1 67 d4 40 64 d4 22 19 14 38 fc 93 .6jC.g.@d."..8.. 00:22:29.542 00000060 e0 77 1d 69 2b 07 cc a5 a2 9c 89 88 81 d8 51 a2 .w.i+.........Q. 00:22:29.542 00000070 cd 94 37 65 92 c7 93 9d c4 73 50 18 0f b2 5a 83 ..7e.....sP...Z. 00:22:29.542 00000080 c4 b5 19 2c ae 27 42 f3 99 7e 59 0a 68 b2 1e 47 ...,.'B..~Y.h..G 00:22:29.542 00000090 c1 ec 23 12 c1 46 d9 9d cc d5 97 fe 8f 6e c2 45 ..#..F.......n.E 00:22:29.542 000000a0 5f 87 6b 00 51 06 2a b7 d0 e5 48 c3 0f 05 ad 42 _.k.Q.*...H....B 00:22:29.542 000000b0 18 20 0a fe da 36 cc ee 11 6a a4 bc cc aa eb 44 . ...6...j.....D 00:22:29.542 000000c0 cc 5a c9 ef 30 9d d3 ab 3d 42 bc 05 52 0b 08 e5 .Z..0...=B..R... 00:22:29.542 000000d0 27 d4 c1 67 7a d2 54 07 b7 aa 76 a6 9a f5 47 11 '..gz.T...v...G. 00:22:29.542 000000e0 47 a9 bd fa 22 f2 00 fc 10 d0 31 ff b9 38 05 cf G...".....1..8.. 00:22:29.542 000000f0 2e 48 0e 4e e9 16 20 fa 9a 28 d9 94 86 3c 6a 47 .H.N.. 
..(...j..d 00:22:29.542 00000050 1a 08 a0 7a d3 78 ac d4 49 a0 45 ab e7 d5 1a 7b ...z.x..I.E....{ 00:22:29.542 00000060 12 b4 8a 62 fc 79 cf a4 95 e5 50 f5 06 8f 8e 5a ...b.y....P....Z 00:22:29.542 00000070 4f 22 70 72 75 27 ec 24 44 96 65 22 47 d1 fd 54 O"pru'.$D.e"G..T 00:22:29.542 00000080 5e 56 97 92 97 a5 ac 25 2e 61 56 26 4e b1 a5 a6 ^V.....%.aV&N... 00:22:29.542 00000090 ee bf f0 2d c1 33 c5 77 77 7b 7f b5 47 a5 00 64 ...-.3.ww{..G..d 00:22:29.542 000000a0 bf 67 e1 08 28 a1 a9 2d cd 64 b2 fd 92 ce 03 b6 .g..(..-.d...... 00:22:29.542 000000b0 ea 10 cf 9b a9 ac b0 6c d8 52 7b 64 7c 3a 67 b4 .......l.R{d|:g. 00:22:29.542 000000c0 6d df 85 52 2d 01 c3 2e 1a d7 8d 63 08 00 b1 a0 m..R-......c.... 00:22:29.542 000000d0 be 6a 72 d7 e1 67 b5 29 f3 9b 6b 8d b3 15 22 7f .jr..g.)..k...". 00:22:29.542 000000e0 d3 69 76 34 3c c0 54 94 e9 4b 33 63 6d af 6b 0d .iv4<.T..K3cm.k. 00:22:29.542 000000f0 70 4f 57 d2 f0 e2 c6 f4 6e 57 0b 82 e2 77 71 ae pOW.....nW...wq. 00:22:29.542 00000100 a5 26 39 8a c8 bd dc 63 16 88 07 4a 53 9b 8c ee .&9....c...JS... 00:22:29.542 00000110 00 f7 41 31 a2 92 39 77 32 27 39 85 0d 94 58 5a ..A1..9w2'9...XZ 00:22:29.542 00000120 ad 8d 66 f2 ee 31 bc 61 b5 03 56 0e 3e 57 ec ab ..f..1.a..V.>W.. 00:22:29.542 00000130 55 b8 de 3a 4e 1b 5f d6 72 68 4e a1 9e 09 28 60 U..:N._.rhN...(` 00:22:29.542 00000140 95 ca df 87 cb 25 cf 57 57 be 5d 07 98 a4 3d 62 .....%.WW.]...=b 00:22:29.542 00000150 9e 9e 3f 3b 67 20 ac 74 78 b5 16 03 94 a6 d3 4f ..?;g .tx......O 00:22:29.542 00000160 5d 76 9a ae 3c 73 b8 b4 83 06 83 75 d7 08 48 02 ]v.. 00:22:29.542 00000130 92 0f ed 6e 7c 61 e1 e2 8a 37 fb a2 61 bb 26 07 ...n|a...7..a.&. 00:22:29.542 00000140 e0 f8 51 da d1 4c 95 4c d4 af 01 46 5e aa 5c d8 ..Q..L.L...F^.\. 00:22:29.542 00000150 87 cd 75 6e 6b 74 98 cb 47 91 14 66 a5 46 50 a9 ..unkt..G..f.FP. 00:22:29.542 00000160 6e 88 6f ad c4 17 7c c3 d8 69 a8 b7 2a c1 dc c7 n.o...|..i..*... 00:22:29.542 00000170 85 74 66 35 b1 98 71 89 31 80 2a 49 91 eb c5 21 .tf5..q.1.*I...! 00:22:29.542 00000180 33 88 12 65 69 ce 1e fd 22 9c 34 7d d3 a3 60 59 3..ei...".4}..`Y 00:22:29.542 00000190 2c ab fc 85 d4 35 06 0e a4 62 28 aa c3 c8 c2 64 ,....5...b(....d 00:22:29.542 000001a0 b6 ab 1a 20 ae bb 05 c8 94 bb b4 82 43 40 a6 b2 ... ........C@.. 00:22:29.542 000001b0 66 f8 d8 97 1a 29 8b cd 41 e9 40 66 b6 2d b4 60 f....)..A.@f.-.` 00:22:29.542 000001c0 12 fb c3 76 63 34 4c b5 26 9c 1b 5f 6c b4 31 a1 ...vc4L.&.._l.1. 00:22:29.542 000001d0 6d ed 77 6a 6c b6 2d ca 7e 50 8e c2 b2 68 fc 8d m.wjl.-.~P...h.. 00:22:29.542 000001e0 78 bb e6 81 03 7b f0 b4 20 8e a1 bd 75 99 8d b3 x....{.. ...u... 
00:22:29.542 000001f0 63 29 71 a4 fa 95 cd 97 12 33 ff a6 cb 73 48 56 c)q......3...sHV 00:22:29.542 [2024-09-27 13:27:23.449299] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key0, hash=3, dhgroup=3, seq=3775755294, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.542 [2024-09-27 13:27:23.449610] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.542 [2024-09-27 13:27:23.473985] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.542 [2024-09-27 13:27:23.474406] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.542 [2024-09-27 13:27:23.474652] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.542 [2024-09-27 13:27:23.474849] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.542 [2024-09-27 13:27:23.583665] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.542 [2024-09-27 13:27:23.583877] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.542 [2024-09-27 13:27:23.584407] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 3 (ffdhe4096) 00:22:29.542 [2024-09-27 13:27:23.584540] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.542 [2024-09-27 13:27:23.584920] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.542 ctrlr pubkey: 00:22:29.542 00000000 88 6e e0 e2 aa 70 24 aa cc ef 69 8f 05 66 d7 6c .n...p$...i..f.l 00:22:29.542 00000010 3c b0 86 3e 83 27 d2 0b 66 5e c4 9f b0 66 7c ac <..>.'..f^...f|. 00:22:29.542 00000020 d8 7e f1 08 04 2f c1 6e 9a 4f 91 a6 97 74 e0 7f .~.../.n.O...t.. 00:22:29.542 00000030 0d 14 56 69 0e da 3b bf 16 ac df d1 69 63 56 bd ..Vi..;.....icV. 00:22:29.542 00000040 0f 04 e9 3d 61 8b d1 ac ec 30 e6 fb a0 8f 7c 59 ...=a....0....|Y 00:22:29.542 00000050 14 51 c2 9b 3f 1c 7c 12 9e 61 75 5d f5 c0 e5 aa .Q..?.|..au].... 00:22:29.542 00000060 46 1d 7a a6 c2 ca fa 48 3c 05 27 85 3b b6 96 93 F.z....H<.'.;... 00:22:29.542 00000070 24 7e be 29 89 bb 84 2e 0d 06 34 18 77 bd 4f f7 $~.)......4.w.O. 00:22:29.542 00000080 01 47 74 ad ed 3f af ab f1 c6 a7 1b a5 2a 01 ac .Gt..?.......*.. 00:22:29.543 00000090 84 e9 63 08 6a 04 f3 0f cb 0d e9 b2 09 56 12 db ..c.j........V.. 00:22:29.543 000000a0 55 93 4f 35 61 5e b8 1d 1d 86 ad c7 09 0b 74 cf U.O5a^........t. 00:22:29.543 000000b0 8f 07 72 7b 75 6f 43 40 74 6a 35 7f 4c c5 27 82 ..r{uoC@tj5.L.'. 00:22:29.543 000000c0 39 3a ed db 32 55 4e e4 2c b8 56 a5 b6 17 09 97 9:..2UN.,.V..... 00:22:29.543 000000d0 2d 9a e7 a9 32 12 bb 8b 0a 40 97 c9 0f 5e 09 a0 -...2....@...^.. 
00:22:29.543 000000e0 b5 f5 06 aa 97 42 2b 1a 11 81 79 17 0d a9 34 75 .....B+...y...4u 00:22:29.543 000000f0 f7 7c 05 90 db e8 65 81 5d 99 94 32 64 16 15 30 .|....e.]..2d..0 00:22:29.543 00000100 28 e2 f5 00 c6 dd db b5 cb 7c 03 0e c2 68 86 ff (........|...h.. 00:22:29.543 00000110 0a d9 57 ac 9f a7 57 65 fa d2 54 81 89 67 33 62 ..W...We..T..g3b 00:22:29.543 00000120 8b 12 4c 80 7a ce 9e 28 9f b9 d5 eb ff 8d bb ec ..L.z..(........ 00:22:29.543 00000130 fa de d2 c3 2a e7 d4 46 51 af 52 c6 bf cd aa cb ....*..FQ.R..... 00:22:29.543 00000140 31 10 3d 1a 8e 63 7e 01 46 ee d8 c6 d2 76 11 09 1.=..c~.F....v.. 00:22:29.543 00000150 2f c2 1f 8c 7a 76 c7 64 64 46 2a 84 71 6d 0d 4d /...zv.ddF*.qm.M 00:22:29.543 00000160 d6 0b f4 98 8c 43 53 43 37 94 05 6e 79 4f c6 15 .....CSC7..nyO.. 00:22:29.543 00000170 d7 71 90 6c 18 78 b2 bb dd 9d 0b 96 36 39 bf f9 .q.l.x......69.. 00:22:29.543 00000180 e2 77 c7 ea 75 67 7c 9b 63 50 b0 c1 98 4f 14 f5 .w..ug|.cP...O.. 00:22:29.543 00000190 ed c9 e7 11 5c e6 34 be 9d 87 9e 5d de cc 86 70 ....\.4....]...p 00:22:29.543 000001a0 f5 88 00 7d 2a 3c e9 e4 2b 65 1e 0a 16 e1 d4 66 ...}*<..+e.....f 00:22:29.543 000001b0 47 1f ae 6e ac 2c 90 ed 42 3f b6 08 2a 8b f0 aa G..n.,..B?..*... 00:22:29.543 000001c0 ef 3a e4 a3 28 4c c7 84 ce ee ee 8e 4f cb 80 42 .:..(L......O..B 00:22:29.543 000001d0 52 7b 45 4d 5a 39 48 98 d7 a8 10 97 0e c8 0e 17 R{EMZ9H......... 00:22:29.543 000001e0 28 c5 45 65 1d 6c 6c ec 77 05 20 5b 38 a0 5f 77 (.Ee.ll.w. [8._w 00:22:29.543 000001f0 74 b9 28 c7 5b d5 11 33 2d 32 52 23 17 a9 81 ad t.(.[..3-2R#.... 00:22:29.543 host pubkey: 00:22:29.543 00000000 5a 06 f5 b4 67 5a a7 a1 f7 47 09 25 d8 03 ad d4 Z...gZ...G.%.... 00:22:29.543 00000010 95 55 76 48 91 14 68 5f 63 83 0e a2 56 1a 72 56 .UvH..h_c...V.rV 00:22:29.543 00000020 19 73 59 0c e2 a2 bb c6 2b fa c9 6a 3f 02 47 e9 .sY.....+..j?.G. 00:22:29.543 00000030 15 5e d1 da c9 74 a2 98 1f d6 9b ba a7 5e 1e 46 .^...t.......^.F 00:22:29.543 00000040 29 32 13 f7 26 df 09 b5 04 40 54 39 2f cc 27 c2 )2..&....@T9/.'. 00:22:29.543 00000050 9d 4b da ed 52 d9 6c 6c 11 0c 22 84 fc fa 24 8e .K..R.ll.."...$. 00:22:29.543 00000060 ef c4 50 5c a0 5e ec cd 5c a5 77 cf 4d 08 57 28 ..P\.^..\.w.M.W( 00:22:29.543 00000070 3a be f5 0e 3d 37 b6 84 15 1d f0 bf b2 9c 5c ca :...=7........\. 00:22:29.543 00000080 b7 f1 ff 3d 28 f9 ea 0b 76 e7 05 b0 a4 2e e2 6b ...=(...v......k 00:22:29.543 00000090 d7 5f 41 f7 fc 0b cf 9a 4b 67 41 18 b5 78 53 68 ._A.....KgA..xSh 00:22:29.543 000000a0 fe d4 da ce c2 ee 60 94 15 9f 88 1e 39 a6 64 8f ......`.....9.d. 00:22:29.543 000000b0 75 15 d6 68 19 e6 ef 2b c3 b3 27 6f 25 99 92 17 u..h...+..'o%... 00:22:29.543 000000c0 a6 bf b6 ca 46 7a 6f ef fa 4c 26 49 c9 66 15 1d ....Fzo..L&I.f.. 00:22:29.543 000000d0 b5 c7 02 03 8f 77 a9 41 57 c7 61 3c 4e 59 1d 7c .....w.AW.a+.f..Y.. H_..C 00:22:29.543 00000010 47 0e e8 b7 5a ef 69 d0 44 f9 c6 f8 93 69 a4 b6 G...Z.i.D....i.. 00:22:29.543 00000020 df f3 ee 94 5e b4 db 8f 4c c8 da 41 35 ed 89 a3 ....^...L..A5... 00:22:29.543 00000030 96 22 33 af b0 3a 0f e5 4d c0 cd e2 a9 0d 1e fd ."3..:..M....... 00:22:29.543 00000040 e0 e8 f3 45 17 2c 01 b6 10 4a c3 61 63 5b d4 2a ...E.,...J.ac[.* 00:22:29.543 00000050 2a f3 0d e6 54 40 da c6 d8 7b 9c 30 79 a2 94 9a *...T@...{.0y... 00:22:29.543 00000060 d6 ca 19 43 e1 32 8e aa 32 2d 19 61 a4 0b 2e 01 ...C.2..2-.a.... 00:22:29.543 00000070 a6 d5 42 81 66 d6 f4 d7 31 60 81 1e 6c 16 7c 9c ..B.f...1`..l.|. 00:22:29.543 00000080 7f 86 35 59 1d 25 c8 eb bd f6 1a 02 fe 5f 29 bf ..5Y.%......._). 
00:22:29.543 00000090 70 54 41 5e 9f 5f 87 76 eb 87 d4 6f f2 1d e6 bb pTA^._.v...o.... 00:22:29.543 000000a0 09 96 a6 b2 fb f0 5f 93 a3 f0 d2 64 ac 9a 15 37 ......_....d...7 00:22:29.543 000000b0 fd 81 02 9a 1b ae 65 38 bc b9 36 2b 41 e6 40 3e ......e8..6+A.@> 00:22:29.543 000000c0 94 a3 d8 f4 3c 78 07 cf 8f 0c 2a 61 0e fc 4c 01 ....Y..F.H....... 00:22:29.543 00000150 1c cd 6d b3 ef a9 d8 ff 25 aa f2 b9 ba 6d 89 39 ..m.....%....m.9 00:22:29.543 00000160 0d 6a 71 f7 70 60 72 8b fc 60 05 b8 b1 21 bc a9 .jq.p`r..`...!.. 00:22:29.543 00000170 17 bf 0f de 4a 52 f7 67 23 1c 4d 10 bc b0 00 cc ....JR.g#.M..... 00:22:29.543 00000180 8c 46 f5 df 6f 84 89 01 dd e3 44 0b d6 57 0f 4d .F..o.....D..W.M 00:22:29.543 00000190 47 e5 cc 7c 48 3b c5 cb 2f 6c d1 29 4f bc 6d f1 G..|H;../l.)O.m. 00:22:29.543 000001a0 90 d4 66 2c bf aa 90 f5 71 4e 84 5f 98 cb ee 6d ..f,....qN._...m 00:22:29.543 000001b0 91 ae 28 20 75 da fc c4 b2 62 18 16 9b fd 59 6f ..( u....b....Yo 00:22:29.543 000001c0 69 c0 e2 9a ab fc c0 75 f3 71 ef a1 23 10 d6 92 i......u.q..#... 00:22:29.543 000001d0 e9 15 64 05 d4 70 8c 6e af a3 eb 76 5d 13 06 20 ..d..p.n...v].. 00:22:29.543 000001e0 f6 54 af 45 17 db fa a8 36 ef 94 fe 25 9d 7c 3d .T.E....6...%.|= 00:22:29.543 000001f0 14 c0 bb 71 b8 db 5c ae 6f f5 b3 1c c2 20 dc 4b ...q..\.o.... .K 00:22:29.543 [2024-09-27 13:27:23.612449] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key1, hash=3, dhgroup=3, seq=3775755295, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.543 [2024-09-27 13:27:23.612774] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.543 [2024-09-27 13:27:23.636253] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.543 [2024-09-27 13:27:23.636617] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.543 [2024-09-27 13:27:23.636939] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.543 [2024-09-27 13:27:23.637091] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.543 [2024-09-27 13:27:23.688962] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.543 [2024-09-27 13:27:23.689238] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.543 [2024-09-27 13:27:23.689422] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 3 (ffdhe4096) 00:22:29.543 [2024-09-27 13:27:23.689740] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.543 [2024-09-27 13:27:23.689923] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.543 ctrlr pubkey: 00:22:29.543 00000000 88 6e e0 e2 aa 70 24 aa cc ef 69 8f 05 66 d7 6c .n...p$...i..f.l 00:22:29.543 00000010 3c b0 86 3e 83 27 d2 0b 66 5e c4 9f b0 66 7c ac <..>.'..f^...f|. 
00:22:29.543 00000020 d8 7e f1 08 04 2f c1 6e 9a 4f 91 a6 97 74 e0 7f .~.../.n.O...t.. 00:22:29.543 00000030 0d 14 56 69 0e da 3b bf 16 ac df d1 69 63 56 bd ..Vi..;.....icV. 00:22:29.543 00000040 0f 04 e9 3d 61 8b d1 ac ec 30 e6 fb a0 8f 7c 59 ...=a....0....|Y 00:22:29.543 00000050 14 51 c2 9b 3f 1c 7c 12 9e 61 75 5d f5 c0 e5 aa .Q..?.|..au].... 00:22:29.543 00000060 46 1d 7a a6 c2 ca fa 48 3c 05 27 85 3b b6 96 93 F.z....H<.'.;... 00:22:29.543 00000070 24 7e be 29 89 bb 84 2e 0d 06 34 18 77 bd 4f f7 $~.)......4.w.O. 00:22:29.543 00000080 01 47 74 ad ed 3f af ab f1 c6 a7 1b a5 2a 01 ac .Gt..?.......*.. 00:22:29.543 00000090 84 e9 63 08 6a 04 f3 0f cb 0d e9 b2 09 56 12 db ..c.j........V.. 00:22:29.543 000000a0 55 93 4f 35 61 5e b8 1d 1d 86 ad c7 09 0b 74 cf U.O5a^........t. 00:22:29.543 000000b0 8f 07 72 7b 75 6f 43 40 74 6a 35 7f 4c c5 27 82 ..r{uoC@tj5.L.'. 00:22:29.543 000000c0 39 3a ed db 32 55 4e e4 2c b8 56 a5 b6 17 09 97 9:..2UN.,.V..... 00:22:29.543 000000d0 2d 9a e7 a9 32 12 bb 8b 0a 40 97 c9 0f 5e 09 a0 -...2....@...^.. 00:22:29.543 000000e0 b5 f5 06 aa 97 42 2b 1a 11 81 79 17 0d a9 34 75 .....B+...y...4u 00:22:29.543 000000f0 f7 7c 05 90 db e8 65 81 5d 99 94 32 64 16 15 30 .|....e.]..2d..0 00:22:29.543 00000100 28 e2 f5 00 c6 dd db b5 cb 7c 03 0e c2 68 86 ff (........|...h.. 00:22:29.543 00000110 0a d9 57 ac 9f a7 57 65 fa d2 54 81 89 67 33 62 ..W...We..T..g3b 00:22:29.543 00000120 8b 12 4c 80 7a ce 9e 28 9f b9 d5 eb ff 8d bb ec ..L.z..(........ 00:22:29.543 00000130 fa de d2 c3 2a e7 d4 46 51 af 52 c6 bf cd aa cb ....*..FQ.R..... 00:22:29.543 00000140 31 10 3d 1a 8e 63 7e 01 46 ee d8 c6 d2 76 11 09 1.=..c~.F....v.. 00:22:29.543 00000150 2f c2 1f 8c 7a 76 c7 64 64 46 2a 84 71 6d 0d 4d /...zv.ddF*.qm.M 00:22:29.543 00000160 d6 0b f4 98 8c 43 53 43 37 94 05 6e 79 4f c6 15 .....CSC7..nyO.. 00:22:29.543 00000170 d7 71 90 6c 18 78 b2 bb dd 9d 0b 96 36 39 bf f9 .q.l.x......69.. 00:22:29.543 00000180 e2 77 c7 ea 75 67 7c 9b 63 50 b0 c1 98 4f 14 f5 .w..ug|.cP...O.. 00:22:29.543 00000190 ed c9 e7 11 5c e6 34 be 9d 87 9e 5d de cc 86 70 ....\.4....]...p 00:22:29.543 000001a0 f5 88 00 7d 2a 3c e9 e4 2b 65 1e 0a 16 e1 d4 66 ...}*<..+e.....f 00:22:29.543 000001b0 47 1f ae 6e ac 2c 90 ed 42 3f b6 08 2a 8b f0 aa G..n.,..B?..*... 00:22:29.543 000001c0 ef 3a e4 a3 28 4c c7 84 ce ee ee 8e 4f cb 80 42 .:..(L......O..B 00:22:29.544 000001d0 52 7b 45 4d 5a 39 48 98 d7 a8 10 97 0e c8 0e 17 R{EMZ9H......... 00:22:29.544 000001e0 28 c5 45 65 1d 6c 6c ec 77 05 20 5b 38 a0 5f 77 (.Ee.ll.w. [8._w 00:22:29.544 000001f0 74 b9 28 c7 5b d5 11 33 2d 32 52 23 17 a9 81 ad t.(.[..3-2R#.... 00:22:29.544 host pubkey: 00:22:29.544 00000000 72 55 fc 2c 40 bd 13 78 93 6f ca 42 53 2a de ed rU.,@..x.o.BS*.. 00:22:29.544 00000010 fe 20 45 5a d8 e3 cb 06 11 f6 9a 0a be b7 53 29 . EZ..........S) 00:22:29.544 00000020 7d c3 61 4d b1 9b 1a 1a 11 f8 3c 3f 59 bd 9e ef }.aM.......c|.1. 00:22:29.544 000000c0 ce ad 39 2b 03 a7 78 00 d4 09 b5 a3 e7 02 6d fa ..9+..x.......m. 00:22:29.544 000000d0 bf 6e 96 a5 64 a0 d1 6f a3 5f 62 31 85 02 44 3e .n..d..o._b1..D> 00:22:29.544 000000e0 57 99 08 2e a3 8a e8 a4 68 63 15 45 d3 9d c5 2c W.......hc.E..., 00:22:29.544 000000f0 4d ff 1d af 81 33 ca 0a ab 4b 4d 37 31 d7 85 76 M....3...KM71..v 00:22:29.544 00000100 82 3c d9 88 81 95 71 d4 84 15 a6 c1 b5 d4 da 4b .<....q........K 00:22:29.544 00000110 89 64 51 5c df 20 45 e4 e9 98 11 ec 21 91 fa 16 .dQ\. E.....!... 00:22:29.544 00000120 88 f0 02 10 75 ec f8 11 f9 1f 7f c5 82 d4 c3 13 ....u........... 
00:22:29.544 00000130 f5 88 15 bd f5 e8 9f e9 9a 6c a2 e5 92 f9 d1 5f .........l....._ 00:22:29.544 00000140 b7 ab da c9 2e fb 41 c2 ce 93 ba 17 4d 14 97 48 ......A.....M..H 00:22:29.544 00000150 03 e8 7f 9f 10 4b 91 3a cd f2 be b6 9f b8 c5 7f .....K.:........ 00:22:29.544 00000160 a0 9c 51 37 a8 80 0c c3 43 95 fb bd 3a 41 f9 d2 ..Q7....C...:A.. 00:22:29.544 00000170 88 35 57 9f b6 de cd 3d 01 74 e1 82 33 eb 51 81 .5W....=.t..3.Q. 00:22:29.544 00000180 2b cc 62 ac e8 b3 d1 56 bf 64 6a c2 98 96 77 d6 +.b....V.dj...w. 00:22:29.544 00000190 49 27 2f fe bc 0c b4 bb 55 04 fa c2 d2 0f 3e 9d I'/.....U.....>. 00:22:29.544 000001a0 18 96 09 1f 8b 3b 61 9b b0 76 a4 6c bc d7 0c c9 .....;a..v.l.... 00:22:29.544 000001b0 27 d9 c5 de 66 ff b4 52 1a 37 e8 fb 22 9d e8 eb '...f..R.7.."... 00:22:29.544 000001c0 e6 0c 14 df b9 35 15 93 fe bf c5 9e 09 2d 53 28 .....5.......-S( 00:22:29.544 000001d0 ba fd 8f 0f 33 dd 68 64 6d e7 e5 be 96 93 4f 11 ....3.hdm.....O. 00:22:29.544 000001e0 99 80 0d a1 85 aa fb 75 1e be 01 57 b9 4b 33 bf .......u...W.K3. 00:22:29.544 000001f0 5b 56 71 f5 06 fd 94 e0 cf 03 ff 27 ce dd 1f 72 [Vq........'...r 00:22:29.544 dh secret: 00:22:29.544 00000000 98 60 99 8b e8 d7 44 78 c0 8c 7f ef a8 2c 5f f2 .`....Dx.....,_. 00:22:29.544 00000010 17 ee 8a 83 a5 1e c6 f2 7b e4 c6 45 6b e7 00 25 ........{..Ek..% 00:22:29.544 00000020 e3 6b 81 fc 47 c5 74 62 ea 03 96 32 d4 26 f1 1f .k..G.tb...2.&.. 00:22:29.544 00000030 06 fd 7b ed 4c c0 14 0a 60 97 01 b0 3a b9 1f 8f ..{.L...`...:... 00:22:29.544 00000040 4a ae 1c ce 0e 26 3a 8b d8 5b 42 4c 95 66 bb f3 J....&:..[BL.f.. 00:22:29.544 00000050 d1 99 87 96 eb 7a f8 e7 ee f2 81 4a 5f 59 5e 77 .....z.....J_Y^w 00:22:29.544 00000060 af 19 c3 a7 8a b7 17 14 62 b9 bd 7d 98 f0 59 95 ........b..}..Y. 00:22:29.544 00000070 f6 14 63 05 27 9a 5d 14 50 fe 34 23 2e 07 0b 8d ..c.'.].P.4#.... 00:22:29.544 00000080 b7 d8 f0 f1 8f 66 ad 8a 10 71 c6 0e e0 c2 0f f7 .....f...q...... 00:22:29.544 00000090 a2 4c 08 9d 3a fa 6d 9c 6b 46 33 26 00 69 2c d3 .L..:.m.kF3&.i,. 00:22:29.544 000000a0 24 23 0b a2 99 b0 75 b5 8b 51 2d e0 a4 10 a1 6f $#....u..Q-....o 00:22:29.544 000000b0 82 37 82 dc 0e 34 ff 64 87 e7 88 21 95 c7 15 71 .7...4.d...!...q 00:22:29.544 000000c0 b7 be af 88 2b 44 f1 54 18 4b f8 24 5a 7d 35 c8 ....+D.T.K.$Z}5. 00:22:29.544 000000d0 d9 26 91 58 ca db 8f 62 11 e7 8c ef 3e 70 89 c9 .&.X...b....>p.. 00:22:29.544 000000e0 dd 7a fc 16 ff bf 44 d2 69 87 b8 9c 0c b8 5d 79 .z....D.i.....]y 00:22:29.544 000000f0 89 6b b6 99 31 96 ce 67 3d 03 74 ad 10 e6 0d b2 .k..1..g=.t..... 00:22:29.544 00000100 89 3a 7e 47 b3 76 6d 91 95 6f 4c 10 fc 21 46 b4 .:~G.vm..oL..!F. 00:22:29.544 00000110 34 e0 ef 6e 1f d8 de 1c b6 0d 66 21 f0 84 03 b8 4..n......f!.... 00:22:29.544 00000120 10 28 8b fd 35 43 85 03 bd 98 b8 76 13 3a 1a 1d .(..5C.....v.:.. 00:22:29.544 00000130 30 51 66 d8 44 c6 8b d2 33 93 02 94 b6 cb dd 21 0Qf.D...3......! 00:22:29.544 00000140 45 a9 39 49 11 43 8a 2a 70 b4 fb a7 86 cc 51 a9 E.9I.C.*p.....Q. 00:22:29.544 00000150 ae 2e 3e 37 61 f6 6a 78 04 1d a8 85 07 d1 b4 8e ..>7a.jx........ 00:22:29.544 00000160 4c fc 7e 24 d9 82 3d b0 ea 84 a6 9d 39 8f 95 69 L.~$..=.....9..i 00:22:29.544 00000170 a0 c6 6f e0 b3 a9 6e fc d2 c3 3d f2 9a c2 2d 8d ..o...n...=...-. 00:22:29.544 00000180 06 69 e9 b4 60 ae dc ca e1 b4 93 fc f3 35 e2 db .i..`........5.. 
00:22:29.544 00000190 09 98 7f 19 ce 23 80 05 84 af 80 58 ac 82 6d 5d .....#.....X..m] 00:22:29.544 000001a0 65 17 3e 0f 53 ac 9a fb 90 45 f5 fe 04 e9 53 51 e.>.S....E....SQ 00:22:29.544 000001b0 08 37 15 8b 24 d8 80 4a 41 b5 a6 cf ea 4a 58 83 .7..$..JA....JX. 00:22:29.544 000001c0 8c d9 9c 4b 9f d9 51 bc 3a 42 ca 2a 3a 67 e0 e1 ...K..Q.:B.*:g.. 00:22:29.544 000001d0 24 3a 37 57 0c b0 ed a1 fd 6e b3 90 88 13 c8 98 $:7W.....n...... 00:22:29.544 000001e0 fb 84 8a 2f 91 95 68 aa 0f 07 ad 99 40 39 ea 95 .../..h.....@9.. 00:22:29.544 000001f0 52 89 54 5e c4 6b 68 61 c5 e1 17 46 19 c8 99 ba R.T^.kha...F.... 00:22:29.544 [2024-09-27 13:27:23.721933] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key1, hash=3, dhgroup=3, seq=3775755296, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.544 [2024-09-27 13:27:23.722233] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.544 [2024-09-27 13:27:23.745419] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.544 [2024-09-27 13:27:23.745760] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.544 [2024-09-27 13:27:23.746005] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.544 [2024-09-27 13:27:23.746243] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.544 [2024-09-27 13:27:23.854002] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.544 [2024-09-27 13:27:23.854316] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.544 [2024-09-27 13:27:23.854453] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 3 (ffdhe4096) 00:22:29.544 [2024-09-27 13:27:23.854670] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.544 [2024-09-27 13:27:23.854961] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.544 ctrlr pubkey: 00:22:29.544 00000000 97 84 6b 10 19 78 21 c8 cb aa 20 a7 d4 dc 0b 18 ..k..x!... ..... 00:22:29.544 00000010 e4 75 a7 2f ba b3 c8 53 8e ce 8b 00 c1 e0 2b 9b .u./...S......+. 00:22:29.544 00000020 4d 27 ad f3 6c 6a 84 92 88 07 42 77 40 f6 ea ed M'..lj....Bw@... 00:22:29.544 00000030 e9 2b bd 4e 65 d5 76 06 8e 5f da 97 3e fe 20 3b .+.Ne.v.._..>. ; 00:22:29.544 00000040 40 76 42 de d4 4a 8f 0a 84 ea d0 16 24 ad 27 af @vB..J......$.'. 00:22:29.544 00000050 96 b3 14 4d a0 9d 84 0f b9 15 52 f2 84 f0 c7 77 ...M......R....w 00:22:29.544 00000060 29 cb 72 e0 2c 33 c2 75 7c 3c b4 0d 6a 93 55 36 ).r.,3.u|<..j.U6 00:22:29.544 00000070 7e 14 4f 12 e6 fd ef 71 d8 bb 1c d0 64 31 0b ba ~.O....q....d1.. 00:22:29.544 00000080 e0 6f 60 e5 8f 95 bc e8 34 fb df f3 84 0c 8f 8a .o`.....4....... 00:22:29.544 00000090 a0 70 df db 9d f2 98 fb 0e d6 4d b5 7a b9 c8 a3 .p........M.z... 
00:22:29.544 000000a0 e9 06 ab 3d de bd 0b 6c 9a 60 2d 83 f9 9f ea cc ...=...l.`-..... 00:22:29.544 000000b0 f4 d0 f8 fe 4a 89 e8 cb ef 50 6c 0d c5 28 81 61 ....J....Pl..(.a 00:22:29.544 000000c0 19 62 c5 1c 4a 80 3e 6a ae 4f be e4 9e cc 17 2a .b..J.>j.O.....* 00:22:29.544 000000d0 17 d8 d4 3c 6b 5b ff b6 84 fc 43 d2 2e 05 0b 0e ... 00:22:29.544 00000030 97 9d f8 c4 26 10 a4 7f 53 6e ee d1 d3 fb d7 7b ....&...Sn.....{ 00:22:29.544 00000040 bf 7f 78 92 f9 50 d8 7a 07 75 96 b6 a3 e2 2a eb ..x..P.z.u....*. 00:22:29.544 00000050 84 39 b9 b6 3c e4 e4 b7 67 d1 62 05 26 64 a9 4e .9..<...g.b.&d.N 00:22:29.544 00000060 9f 0f 02 cf 57 04 94 78 04 09 c7 47 a6 b1 cf 2c ....W..x...G..., 00:22:29.544 00000070 0a c8 92 a5 e4 fa fd ae ba d4 f9 6e 12 cf db 4a ...........n...J 00:22:29.544 00000080 84 75 e3 e4 9f a8 45 e3 b2 88 c4 7b 3f 02 b0 95 .u....E....{?... 00:22:29.544 00000090 87 50 6d 77 2b d9 80 c1 ca 81 52 73 0e 5c d1 6a .Pmw+.....Rs.\.j 00:22:29.544 000000a0 60 90 79 79 ca aa 0d 96 87 81 19 65 40 db 91 6b `.yy.......e@..k 00:22:29.544 000000b0 c2 1c 63 b3 b2 86 a3 a1 cf 87 74 3b 45 79 e7 71 ..c.......t;Ey.q 00:22:29.544 000000c0 9d a6 d2 b8 25 46 1c 81 c8 68 24 3f d2 e5 19 3c ....%F...h$?...< 00:22:29.544 000000d0 41 7b 38 c4 4a 5a 6b 87 01 81 af 95 52 8c 4d 73 A{8.JZk.....R.Ms 00:22:29.544 000000e0 b5 75 4f bd 30 9e 30 1b 91 2d 88 4c 2f f5 70 dc .uO.0.0..-.L/.p. 00:22:29.544 000000f0 09 68 f8 14 77 68 ab 59 97 cd aa ec b3 e8 d7 8c .h..wh.Y........ 00:22:29.544 00000100 19 47 86 6c 1d 30 a8 cb 77 bd 3b 22 dc 28 dd 31 .G.l.0..w.;".(.1 00:22:29.544 00000110 26 1e 42 72 57 f8 96 f4 5a ba 2d a6 35 a8 81 a9 &.BrW...Z.-.5... 00:22:29.544 00000120 e4 21 17 a3 36 8c ba 12 99 e4 58 38 96 28 95 aa .!..6.....X8.(.. 00:22:29.544 00000130 95 0a e5 e2 99 a1 57 ac cf ff cd 00 57 c6 e9 fb ......W.....W... 00:22:29.544 00000140 17 57 b1 3a bf a3 b8 82 fb 97 4b e3 ee 57 aa 56 .W.:......K..W.V 00:22:29.544 00000150 15 d5 3a b0 e1 04 50 83 9d 44 b9 6c 03 f5 1e a2 ..:...P..D.l.... 00:22:29.544 00000160 f2 b6 ec f9 23 f4 61 fe 88 a2 50 3a 47 dd 82 fe ....#.a...P:G... 00:22:29.544 00000170 bc 23 cf 4e cc a4 57 60 4d d6 a6 11 65 0d b2 d6 .#.N..W`M...e... 00:22:29.544 00000180 2d 1a bf a6 d5 52 20 ae f3 95 15 b3 de 78 c1 4a -....R ......x.J 00:22:29.544 00000190 53 73 48 b4 d0 36 25 b2 07 22 3c 46 89 23 2d 20 SsH..6%.."..2......}U.. 00:22:29.545 00000020 fd a6 67 ab 4c 49 bd c6 53 9b a1 34 9f 45 02 8e ..g.LI..S..4.E.. 00:22:29.545 00000030 6f 7b 30 8f c1 e6 6e c1 7a ad 63 ce 7c 88 6a db o{0...n.z.c.|.j. 00:22:29.545 00000040 4e 27 ba b4 79 d5 b4 d0 d7 54 ea eb a3 59 ef 1c N'..y....T...Y.. 00:22:29.545 00000050 f0 0e d2 7f 80 47 d0 22 88 6c 4a ca 0b cc 40 91 .....G.".lJ...@. 00:22:29.545 00000060 7b be 3c a0 e9 9a 6d 75 60 f5 a6 76 7d 03 1f 18 {.<...mu`..v}... 00:22:29.545 00000070 bc 5d 1d 24 8a 1c 28 e7 3a 5b 3f 98 05 cf 34 45 .].$..(.:[?...4E 00:22:29.545 00000080 a2 85 23 12 cc 83 9a ba b4 1c ba 86 a6 ed e1 79 ..#............y 00:22:29.545 00000090 e9 1f 64 e9 90 e6 fb 11 f8 64 a3 fc b7 b3 a0 15 ..d......d...... 00:22:29.545 000000a0 cf 8a 84 82 24 d0 b6 f0 08 13 8a 2d 98 d1 8a 3c ....$......-...< 00:22:29.545 000000b0 91 c3 c0 3a b7 69 67 8c 19 83 ac f7 d4 98 40 37 ...:.ig.......@7 00:22:29.545 000000c0 38 18 fc 21 57 bf 2f a2 ab f7 96 d6 cf b4 9c 88 8..!W./......... 00:22:29.545 000000d0 be 5c 6b b2 77 e2 23 64 14 1d 43 d7 0b 81 6b 85 .\k.w.#d..C...k. 00:22:29.545 000000e0 9b c3 a3 53 b5 3a 08 46 7c 96 f0 9b 87 c3 9b 1d ...S.:.F|....... 
00:22:29.545 000000f0 16 35 00 b7 d6 25 8f d9 29 e0 43 f5 43 f3 04 7d .5...%..).C.C..} 00:22:29.545 00000100 2b 71 12 31 5b 7f 20 35 8e 40 72 b4 cd 7f d3 a4 +q.1[. 5.@r..... 00:22:29.545 00000110 ab 48 8d e7 9b 1e 0b 1d 19 c6 9d a3 0d d7 6d c4 .H............m. 00:22:29.545 00000120 e8 f3 f7 17 d6 84 00 15 7b bc 6d bc dd 97 a5 6a ........{.m....j 00:22:29.545 00000130 be 69 a5 35 db 4e 78 ee a9 15 f7 c6 5e 18 eb 2a .i.5.Nx.....^..* 00:22:29.545 00000140 ee 6c 92 25 b5 33 35 49 02 31 7e 35 f6 c6 dc 77 .l.%.35I.1~5...w 00:22:29.545 00000150 e8 04 30 00 a8 f2 f8 96 46 5f 40 1d 85 e8 f1 aa ..0.....F_@..... 00:22:29.545 00000160 bc 3f 05 92 31 a0 59 f5 cd af d7 33 cd 94 14 fb .?..1.Y....3.... 00:22:29.545 00000170 af 17 e4 20 75 e1 33 a6 39 41 1e 2d 96 53 e2 0c ... u.3.9A.-.S.. 00:22:29.545 00000180 9d 41 fc 8e 64 9b a4 76 f2 4f 44 75 a8 91 b0 11 .A..d..v.ODu.... 00:22:29.545 00000190 1d e0 40 87 e3 e7 a9 c6 13 b9 e3 8a fc 22 b3 cf ..@..........".. 00:22:29.545 000001a0 be 5c 42 61 5e 6b c3 75 47 e6 f4 b3 77 6c 39 34 .\Ba^k.uG...wl94 00:22:29.545 000001b0 7d 1c 14 88 49 98 00 63 f7 c0 95 57 10 c9 9f 98 }...I..c...W.... 00:22:29.545 000001c0 3f fa 87 77 71 a8 78 23 fa 81 9d 28 52 56 2a b5 ?..wq.x#...(RV*. 00:22:29.545 000001d0 dc 6e 99 96 48 03 28 81 e3 fa 04 96 d2 d9 d2 25 .n..H.(........% 00:22:29.545 000001e0 55 8b 07 2d 07 15 d5 d3 72 5e 18 cc 22 28 31 b4 U..-....r^.."(1. 00:22:29.545 000001f0 9a 36 94 ad ad ab 9c a8 3a b9 22 e0 ae 41 09 7a .6......:."..A.z 00:22:29.545 [2024-09-27 13:27:23.890244] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key2, hash=3, dhgroup=3, seq=3775755297, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.545 [2024-09-27 13:27:23.890891] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.545 [2024-09-27 13:27:23.919203] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.545 [2024-09-27 13:27:23.919582] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.545 [2024-09-27 13:27:23.919887] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.545 [2024-09-27 13:27:23.920128] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.545 [2024-09-27 13:27:23.971120] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.545 [2024-09-27 13:27:23.971343] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.545 [2024-09-27 13:27:23.971506] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 3 (ffdhe4096) 00:22:29.545 [2024-09-27 13:27:23.971756] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.545 [2024-09-27 13:27:23.971987] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.545 ctrlr pubkey: 00:22:29.545 00000000 97 84 6b 10 
19 78 21 c8 cb aa 20 a7 d4 dc 0b 18 ..k..x!... ..... 00:22:29.545 00000010 e4 75 a7 2f ba b3 c8 53 8e ce 8b 00 c1 e0 2b 9b .u./...S......+. 00:22:29.545 00000020 4d 27 ad f3 6c 6a 84 92 88 07 42 77 40 f6 ea ed M'..lj....Bw@... 00:22:29.545 00000030 e9 2b bd 4e 65 d5 76 06 8e 5f da 97 3e fe 20 3b .+.Ne.v.._..>. ; 00:22:29.545 00000040 40 76 42 de d4 4a 8f 0a 84 ea d0 16 24 ad 27 af @vB..J......$.'. 00:22:29.545 00000050 96 b3 14 4d a0 9d 84 0f b9 15 52 f2 84 f0 c7 77 ...M......R....w 00:22:29.545 00000060 29 cb 72 e0 2c 33 c2 75 7c 3c b4 0d 6a 93 55 36 ).r.,3.u|<..j.U6 00:22:29.545 00000070 7e 14 4f 12 e6 fd ef 71 d8 bb 1c d0 64 31 0b ba ~.O....q....d1.. 00:22:29.545 00000080 e0 6f 60 e5 8f 95 bc e8 34 fb df f3 84 0c 8f 8a .o`.....4....... 00:22:29.545 00000090 a0 70 df db 9d f2 98 fb 0e d6 4d b5 7a b9 c8 a3 .p........M.z... 00:22:29.545 000000a0 e9 06 ab 3d de bd 0b 6c 9a 60 2d 83 f9 9f ea cc ...=...l.`-..... 00:22:29.545 000000b0 f4 d0 f8 fe 4a 89 e8 cb ef 50 6c 0d c5 28 81 61 ....J....Pl..(.a 00:22:29.545 000000c0 19 62 c5 1c 4a 80 3e 6a ae 4f be e4 9e cc 17 2a .b..J.>j.O.....* 00:22:29.545 000000d0 17 d8 d4 3c 6b 5b ff b6 84 fc 43 d2 2e 05 0b 0e ... 00:22:29.545 00000160 00 c7 e4 72 d3 f2 30 47 50 ad ab 43 8b 23 6d fe ...r..0GP..C.#m. 00:22:29.545 00000170 37 5e 5b ca 89 eb f0 9f a9 78 8c 36 48 80 74 47 7^[......x.6H.tG 00:22:29.545 00000180 fe 09 da 98 58 c9 44 d4 62 9c f2 dc 29 f3 e1 ab ....X.D.b...)... 00:22:29.545 00000190 c1 ef a5 47 e7 a2 91 1d 56 a0 65 6d cb b7 8c c1 ...G....V.em.... 00:22:29.545 000001a0 27 48 26 1a 74 dd 30 eb 37 00 4c 06 b5 0f ed a7 'H&.t.0.7.L..... 00:22:29.545 000001b0 25 b9 ad 83 ba 90 32 4b 26 b8 d3 a6 94 fd 5d 0c %.....2K&.....]. 00:22:29.545 000001c0 09 28 73 34 37 17 a0 6f 39 67 34 af 6e 06 fa bd .(s47..o9g4.n... 00:22:29.545 000001d0 3e c8 7d 12 72 0f c8 49 01 46 c4 21 f5 d4 15 1d >.}.r..I.F.!.... 00:22:29.545 000001e0 26 b3 69 68 66 d1 9c 36 3a ce 40 9b 76 a3 0c 44 &.ihf..6:.@.v..D 00:22:29.545 000001f0 4b b1 da 91 e4 b7 bd 20 1f f5 cf ea 04 ea b0 0b K...... ........ 
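[editor's note] The state transitions traced throughout this run (negotiate, await-negotiate, await-challenge, await-reply, await-success1, await-success2, done) are the DH-HMAC-CHAP handshake driven by nvme_auth.c, and the "ctrlr pubkey" / "host pubkey" / "dh secret" dumps are the two halves and the result of a finite-field Diffie-Hellman exchange over the negotiated RFC 7919 group (dhgroup 3, ffdhe4096, hence the 512-byte values here; later rounds negotiate dhgroup 4, ffdhe6144, and the dumps grow to 768 bytes). The sketch below is a minimal, non-SPDK illustration of that key-exchange arithmetic only, assuming generator 2 as used by the ffdhe groups; the modulus P is a toy 64-bit prime and the helper names are placeholders, not SPDK identifiers.

    # Minimal sketch (not SPDK code) of the FFDHE step behind the pubkey/secret dumps.
    import secrets

    P = 0xFFFFFFFFFFFFFFC5   # toy 64-bit prime; the real groups use the RFC 7919 primes
    G = 2                    # generator shared by the ffdhe groups

    def keypair(p=P, g=G):
        priv = secrets.randbelow(p - 2) + 1      # private exponent, never sent
        return priv, pow(g, priv, p)             # (private, pubkey sent to the peer)

    def dh_secret(peer_pub, priv, p=P):
        width = (p.bit_length() + 7) // 8        # 512 bytes for ffdhe4096, 768 for ffdhe6144
        return pow(peer_pub, priv, p).to_bytes(width, "big")

    host_priv, host_pub = keypair()              # corresponds to "host pubkey"
    ctrlr_priv, ctrlr_pub = keypair()            # corresponds to "ctrlr pubkey"
    assert dh_secret(ctrlr_pub, host_priv) == dh_secret(host_pub, ctrlr_priv)   # "dh secret"

After the exchange, the shared secret is folded into the HMAC challenge/response computed with the negotiated hash (hash 3, sha512) and the configured key (key1-key4 in the nvme_auth_send_reply entries), which is what the await-reply and await-success transitions are verifying before each qpair reaches "authentication completed successfully".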
00:22:29.545 [2024-09-27 13:27:23.998824] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key2, hash=3, dhgroup=3, seq=3775755298, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.545 [2024-09-27 13:27:23.999124] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.545 [2024-09-27 13:27:24.023753] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.545 [2024-09-27 13:27:24.024200] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.545 [2024-09-27 13:27:24.024382] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.545 [2024-09-27 13:27:24.024617] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.546 [2024-09-27 13:27:24.133088] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.546 [2024-09-27 13:27:24.133331] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.546 [2024-09-27 13:27:24.133571] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 3 (ffdhe4096) 00:22:29.546 [2024-09-27 13:27:24.133730] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.546 [2024-09-27 13:27:24.133949] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.546 ctrlr pubkey: 00:22:29.546 00000000 a2 47 e6 7e 52 f6 19 07 f4 da a3 ff a1 41 4a ac .G.~R........AJ. 00:22:29.546 00000010 19 f5 74 27 31 43 73 9f 5b a2 ed 98 ba 6a 50 0f ..t'1Cs.[....jP. 00:22:29.546 00000020 4a c1 92 fe 32 a6 68 81 88 97 46 f1 3c 82 44 fc J...2.h...F.<.D. 00:22:29.546 00000030 dd 46 b8 61 0d 6a 21 bd 87 26 c5 07 e3 7a 17 57 .F.a.j!..&...z.W 00:22:29.546 00000040 04 3c 96 3b 40 b4 73 fa 45 5f 96 f0 76 70 e7 a3 .<.;@.s.E_..vp.. 00:22:29.546 00000050 9e 67 8b 3b e7 22 aa 71 ea 8a f9 dc 7f 07 44 f7 .g.;.".q......D. 00:22:29.546 00000060 2e 3c d5 fc 96 72 4d da 02 c4 53 9a 2b bf 29 bf .<...rM...S.+.). 00:22:29.546 00000070 8e 29 4c cd 10 66 41 6a c5 ce ae 11 3c 8a eb 7f .)L..fAj....<... 00:22:29.546 00000080 c7 d4 6c 0f 4c 29 cf 61 ba 69 3f 51 95 dc b3 31 ..l.L).a.i?Q...1 00:22:29.546 00000090 f0 6c 53 6c d6 9e ca ca a3 ef dc 95 1c 56 3f fb .lSl.........V?. 00:22:29.546 000000a0 d3 3f 7d 83 23 3c d2 bd 94 64 05 37 9b 17 37 73 .?}.#<...d.7..7s 00:22:29.546 000000b0 35 14 83 94 03 ee 66 e7 99 20 e7 29 f9 1d e5 84 5.....f.. .).... 00:22:29.546 000000c0 9c 53 f2 6a ab 1e 1a f3 07 52 bf 09 b9 6e 32 aa .S.j.....R...n2. 00:22:29.546 000000d0 84 7d 4c c5 69 1b 3c 8c d1 c2 6a 11 2b cf 21 9a .}L.i.<...j.+.!. 00:22:29.546 000000e0 dc 0f 90 79 0b 8a 56 99 6d 50 3e f2 0b 31 a6 06 ...y..V.mP>..1.. 
00:22:29.546 000000f0 05 a7 4c 41 ee 99 63 60 16 d3 5e 01 10 0a 4b 7b ..LA..c`..^...K{ 00:22:29.546 00000100 ef 1e 4c 12 85 24 af 9c ea 36 a6 9a 9c ab 3f 41 ..L..$...6....?A 00:22:29.546 00000110 14 a7 20 89 e2 66 53 60 13 bf 27 a1 3a 2e e3 ff .. ..fS`..'.:... 00:22:29.546 00000120 97 87 b5 b8 91 13 06 f5 ff 54 93 e2 e1 39 90 06 .........T...9.. 00:22:29.546 00000130 af 3e 03 a4 2c f2 71 64 29 89 d5 33 c1 8c a6 7f .>..,.qd)..3.... 00:22:29.546 00000140 ce 70 27 4b 98 82 62 d1 fb a2 4f 18 4f 90 94 81 .p'K..b...O.O... 00:22:29.546 00000150 e1 b4 fc 06 fb 88 6f 2b f3 34 63 9a de e5 aa 84 ......o+.4c..... 00:22:29.546 00000160 80 a1 0e e4 07 17 a8 e4 47 b1 bb c1 5a 62 fa af ........G...Zb.. 00:22:29.546 00000170 dc 7b 03 4e 31 3b 05 89 5d 8f 69 e5 70 3d 90 2d .{.N1;..].i.p=.- 00:22:29.546 00000180 66 8e e4 87 5b 00 cf ea 6a 08 41 3f e9 82 7d 99 f...[...j.A?..}. 00:22:29.546 00000190 ed e1 d0 e9 3c a6 45 06 7e a2 3d 45 6f 3e 10 14 ....<.E.~.=Eo>.. 00:22:29.546 000001a0 89 83 ee f9 87 2c b9 64 dd 40 81 4f 61 f4 35 fb .....,.d.@.Oa.5. 00:22:29.546 000001b0 7b 5a ce 40 4a 7d 02 ca 16 f6 29 4d 7d 3b ff bd {Z.@J}....)M};.. 00:22:29.546 000001c0 b0 d3 0f 74 d3 8c 90 fa 2f fa 70 f7 32 09 55 61 ...t..../.p.2.Ua 00:22:29.546 000001d0 cf 79 93 fd 79 10 d8 37 5e db 47 41 c3 67 ee 7d .y..y..7^.GA.g.} 00:22:29.546 000001e0 10 7f 6c 82 49 e7 1c bb 69 c2 ba 19 9d fc a9 2d ..l.I...i......- 00:22:29.546 000001f0 fb 28 dc 62 3d b3 69 f4 87 27 f2 19 5b ec 4a df .(.b=.i..'..[.J. 00:22:29.546 host pubkey: 00:22:29.546 00000000 1b 22 f6 1a 41 ca 2a 5c 7f 50 8b 81 5d c3 b3 e1 ."..A.*\.P..]... 00:22:29.546 00000010 da 93 d4 ee 8e b7 6e 4a a5 7f a7 d7 8c 3b 34 e4 ......nJ.....;4. 00:22:29.546 00000020 a5 9f 6c 22 06 f2 9c b6 15 e6 62 72 ff cb 80 d1 ..l"......br.... 00:22:29.546 00000030 31 fe 15 4a cb 28 58 9c f3 9b 85 b2 40 11 0d 77 1..J.(X.....@..w 00:22:29.546 00000040 7d 7b 50 f5 8c 34 8b dd b1 a1 fd 0c 21 da 2d c1 }{P..4......!.-. 00:22:29.546 00000050 7f 62 6c 5c ca 6b 25 3a fa 1a b6 fe 4e 0c 0d 46 .bl\.k%:....N..F 00:22:29.546 00000060 24 ed a4 d1 15 4d c0 08 ed 10 d2 e4 86 0c 02 67 $....M.........g 00:22:29.546 00000070 6f b4 8b 63 75 82 91 6e ba 18 2e a8 50 eb e9 fc o..cu..n....P... 00:22:29.546 00000080 b8 00 06 b0 61 a6 89 1f f8 65 05 76 1e c0 01 43 ....a....e.v...C 00:22:29.546 00000090 f8 4f 77 28 78 a1 f9 22 5c af ae 2d b3 6c e6 f8 .Ow(x.."\..-.l.. 00:22:29.546 000000a0 c2 d7 cf 40 4b 7b da 6b e4 1e e2 52 eb 28 85 5a ...@K{.k...R.(.Z 00:22:29.546 000000b0 85 fc 47 49 45 3a b4 f3 22 81 0f a9 10 a1 a7 f1 ..GIE:.."....... 00:22:29.546 000000c0 3e 32 3f 38 58 25 57 26 bb 73 91 e5 26 62 b7 65 >2?8X%W&.s..&b.e 00:22:29.546 000000d0 f5 aa aa 0e 57 19 e5 93 46 70 00 4b 73 fd ef 0f ....W...Fp.Ks... 00:22:29.546 000000e0 e2 c1 d9 de 35 66 7f 08 85 f0 72 5c fc 8b b7 8c ....5f....r\.... 00:22:29.546 000000f0 b7 c7 5b 8d e0 38 48 01 d8 e3 5f 75 3f 0e 88 05 ..[..8H..._u?... 00:22:29.546 00000100 6d b6 a4 50 4c f0 3a c8 87 8b 93 82 b8 f7 cf 8d m..PL.:......... 00:22:29.546 00000110 af 67 37 e3 78 b8 9b f3 6f 51 4c eb 0b 37 9f 13 .g7.x...oQL..7.. 00:22:29.546 00000120 00 6c 8b 95 25 41 34 24 a2 ca 7e b2 f7 45 7b bf .l..%A4$..~..E{. 00:22:29.546 00000130 8e 7b f8 59 12 9d 05 77 21 ef b1 4d 45 08 12 42 .{.Y...w!..ME..B 00:22:29.546 00000140 f5 f2 88 04 46 7c 4d ba 64 88 19 39 63 21 b3 f3 ....F|M.d..9c!.. 00:22:29.546 00000150 ff 79 cc 5e 14 26 e3 53 45 f3 7b 0a 2a 71 85 0c .y.^.&.SE.{.*q.. 00:22:29.546 00000160 40 e8 97 93 fb a2 ef e1 07 a6 63 90 3b bb 03 1a @.........c.;... 
00:22:29.546 00000170 57 21 61 a6 bf 79 af fa d7 71 7c f6 cc d0 44 00 W!a..y...q|...D. 00:22:29.546 00000180 a4 31 6d 02 a2 1e 85 b8 3d 0c e6 f1 a6 eb 90 cb .1m.....=....... 00:22:29.546 00000190 ec 53 b9 f1 50 ee 3b e9 9d 44 b2 6e 4c a2 68 c1 .S..P.;..D.nL.h. 00:22:29.546 000001a0 bb e8 a6 02 54 53 1c 9e f9 03 0a 5d 5b 81 63 58 ....TS.....][.cX 00:22:29.546 000001b0 cf 98 14 52 bb 8d 7f 45 da 1a fe 7e 18 c9 9d 88 ...R...E...~.... 00:22:29.546 000001c0 3c 79 f6 cc e3 9d 1b d4 2d 38 ba 32 f1 05 5d 38 .....d6*.4..6 00:22:29.546 000000f0 87 9e fc cb df 6f 4b 84 80 24 7c ba 72 e7 90 54 .....oK..$|.r..T 00:22:29.546 00000100 9b 94 5a 3e d8 a7 15 8c 48 5e a6 2e da 48 f0 08 ..Z>....H^...H.. 00:22:29.546 00000110 4c 29 f9 7a 4c 9c e8 45 63 c7 f3 9e f1 71 19 bb L).zL..Ec....q.. 00:22:29.546 00000120 a9 cb dd fb 84 17 e7 66 1b 8c 9e 2d ae c4 67 5d .......f...-..g] 00:22:29.546 00000130 cb 4a 80 d1 b3 03 92 06 67 16 16 af ae 78 9a 9f .J......g....x.. 00:22:29.546 00000140 77 94 76 90 f1 bb 89 97 f0 2c bb 55 02 68 fd f5 w.v......,.U.h.. 00:22:29.546 00000150 af e3 40 f7 c7 11 69 a5 4b 85 59 35 81 4c 91 c1 ..@...i.K.Y5.L.. 00:22:29.546 00000160 02 44 3f d9 56 80 c7 cf 57 04 eb 70 93 b7 36 f8 .D?.V...W..p..6. 00:22:29.546 00000170 fa ba 9c 61 17 0c d7 15 64 65 1d d0 08 3f 58 41 ...a....de...?XA 00:22:29.546 00000180 72 c9 60 f8 03 34 49 3b 48 63 aa e0 f8 be d7 bd r.`..4I;Hc...... 00:22:29.546 00000190 73 52 8a ea e8 9d 40 03 7c 3a a9 a3 52 2d 42 d4 sR....@.|:..R-B. 00:22:29.546 000001a0 57 e0 95 2a 2d f2 5d 64 4f cb f7 ff 42 c9 4c 87 W..*-.]dO...B.L. 00:22:29.546 000001b0 9b 1a 04 b4 94 09 54 20 2e bb 62 01 b2 9f ef 29 ......T ..b....) 00:22:29.546 000001c0 ba f1 84 ed 50 31 be 80 47 0e 6c e8 d1 4d 98 9e ....P1..G.l..M.. 00:22:29.546 000001d0 a8 28 bf 28 b2 33 9a 19 46 fd eb 7d 24 50 7e 63 .(.(.3..F..}$P~c 00:22:29.546 000001e0 02 db 3b 21 03 22 4e d5 05 11 43 63 52 89 81 c5 ..;!."N...CcR... 
00:22:29.546 000001f0 f4 7c 15 56 69 bd 61 14 60 4d d7 60 ae ae d1 35 .|.Vi.a.`M.`...5 00:22:29.546 [2024-09-27 13:27:24.160796] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key3, hash=3, dhgroup=3, seq=3775755299, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.546 [2024-09-27 13:27:24.161099] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.546 [2024-09-27 13:27:24.186914] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.546 [2024-09-27 13:27:24.187253] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.546 [2024-09-27 13:27:24.187515] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.546 [2024-09-27 13:27:24.187752] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.546 [2024-09-27 13:27:24.239635] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.546 [2024-09-27 13:27:24.239944] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.546 [2024-09-27 13:27:24.240129] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 3 (ffdhe4096) 00:22:29.547 [2024-09-27 13:27:24.240310] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.547 [2024-09-27 13:27:24.240567] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.547 ctrlr pubkey: 00:22:29.547 00000000 a2 47 e6 7e 52 f6 19 07 f4 da a3 ff a1 41 4a ac .G.~R........AJ. 00:22:29.547 00000010 19 f5 74 27 31 43 73 9f 5b a2 ed 98 ba 6a 50 0f ..t'1Cs.[....jP. 00:22:29.547 00000020 4a c1 92 fe 32 a6 68 81 88 97 46 f1 3c 82 44 fc J...2.h...F.<.D. 00:22:29.547 00000030 dd 46 b8 61 0d 6a 21 bd 87 26 c5 07 e3 7a 17 57 .F.a.j!..&...z.W 00:22:29.547 00000040 04 3c 96 3b 40 b4 73 fa 45 5f 96 f0 76 70 e7 a3 .<.;@.s.E_..vp.. 00:22:29.547 00000050 9e 67 8b 3b e7 22 aa 71 ea 8a f9 dc 7f 07 44 f7 .g.;.".q......D. 00:22:29.547 00000060 2e 3c d5 fc 96 72 4d da 02 c4 53 9a 2b bf 29 bf .<...rM...S.+.). 00:22:29.547 00000070 8e 29 4c cd 10 66 41 6a c5 ce ae 11 3c 8a eb 7f .)L..fAj....<... 00:22:29.547 00000080 c7 d4 6c 0f 4c 29 cf 61 ba 69 3f 51 95 dc b3 31 ..l.L).a.i?Q...1 00:22:29.547 00000090 f0 6c 53 6c d6 9e ca ca a3 ef dc 95 1c 56 3f fb .lSl.........V?. 00:22:29.547 000000a0 d3 3f 7d 83 23 3c d2 bd 94 64 05 37 9b 17 37 73 .?}.#<...d.7..7s 00:22:29.547 000000b0 35 14 83 94 03 ee 66 e7 99 20 e7 29 f9 1d e5 84 5.....f.. .).... 00:22:29.547 000000c0 9c 53 f2 6a ab 1e 1a f3 07 52 bf 09 b9 6e 32 aa .S.j.....R...n2. 00:22:29.547 000000d0 84 7d 4c c5 69 1b 3c 8c d1 c2 6a 11 2b cf 21 9a .}L.i.<...j.+.!. 00:22:29.547 000000e0 dc 0f 90 79 0b 8a 56 99 6d 50 3e f2 0b 31 a6 06 ...y..V.mP>..1.. 
00:22:29.547 000000f0 05 a7 4c 41 ee 99 63 60 16 d3 5e 01 10 0a 4b 7b ..LA..c`..^...K{ 00:22:29.547 00000100 ef 1e 4c 12 85 24 af 9c ea 36 a6 9a 9c ab 3f 41 ..L..$...6....?A 00:22:29.547 00000110 14 a7 20 89 e2 66 53 60 13 bf 27 a1 3a 2e e3 ff .. ..fS`..'.:... 00:22:29.547 00000120 97 87 b5 b8 91 13 06 f5 ff 54 93 e2 e1 39 90 06 .........T...9.. 00:22:29.547 00000130 af 3e 03 a4 2c f2 71 64 29 89 d5 33 c1 8c a6 7f .>..,.qd)..3.... 00:22:29.547 00000140 ce 70 27 4b 98 82 62 d1 fb a2 4f 18 4f 90 94 81 .p'K..b...O.O... 00:22:29.547 00000150 e1 b4 fc 06 fb 88 6f 2b f3 34 63 9a de e5 aa 84 ......o+.4c..... 00:22:29.547 00000160 80 a1 0e e4 07 17 a8 e4 47 b1 bb c1 5a 62 fa af ........G...Zb.. 00:22:29.547 00000170 dc 7b 03 4e 31 3b 05 89 5d 8f 69 e5 70 3d 90 2d .{.N1;..].i.p=.- 00:22:29.547 00000180 66 8e e4 87 5b 00 cf ea 6a 08 41 3f e9 82 7d 99 f...[...j.A?..}. 00:22:29.547 00000190 ed e1 d0 e9 3c a6 45 06 7e a2 3d 45 6f 3e 10 14 ....<.E.~.=Eo>.. 00:22:29.547 000001a0 89 83 ee f9 87 2c b9 64 dd 40 81 4f 61 f4 35 fb .....,.d.@.Oa.5. 00:22:29.547 000001b0 7b 5a ce 40 4a 7d 02 ca 16 f6 29 4d 7d 3b ff bd {Z.@J}....)M};.. 00:22:29.547 000001c0 b0 d3 0f 74 d3 8c 90 fa 2f fa 70 f7 32 09 55 61 ...t..../.p.2.Ua 00:22:29.547 000001d0 cf 79 93 fd 79 10 d8 37 5e db 47 41 c3 67 ee 7d .y..y..7^.GA.g.} 00:22:29.547 000001e0 10 7f 6c 82 49 e7 1c bb 69 c2 ba 19 9d fc a9 2d ..l.I...i......- 00:22:29.547 000001f0 fb 28 dc 62 3d b3 69 f4 87 27 f2 19 5b ec 4a df .(.b=.i..'..[.J. 00:22:29.547 host pubkey: 00:22:29.547 00000000 43 15 dc 58 b4 6e ee 6e 7b 67 a6 7c 22 77 1c 4b C..X.n.n{g.|"w.K 00:22:29.547 00000010 50 02 b4 91 19 3b 83 ae 14 b2 0d a7 fe b8 84 01 P....;.......... 00:22:29.547 00000020 a6 8d f5 39 db 71 42 c1 1e e4 38 fc 6e b1 23 85 ...9.qB...8.n.#. 00:22:29.547 00000030 34 5c 13 a4 7f f4 5b 46 1e cc 33 d4 54 bd 5a 1b 4\....[F..3.T.Z. 00:22:29.547 00000040 37 de bb c2 19 6f fa 99 13 1c bb 3d 90 6a 58 6d 7....o.....=.jXm 00:22:29.547 00000050 cf ea 7c 64 c5 c8 62 ac c0 92 6a 77 78 f9 7a 98 ..|d..b...jwx.z. 00:22:29.547 00000060 16 9e 1a f8 3d e5 36 75 0a ff aa 66 72 58 59 cc ....=.6u...frXY. 00:22:29.547 00000070 2c eb bf 9e ff 5e 38 01 03 c8 22 a0 ca 11 81 24 ,....^8..."....$ 00:22:29.547 00000080 97 03 9a bf e6 13 c1 6f e5 8b ff 47 24 90 3f 95 .......o...G$.?. 00:22:29.547 00000090 54 9c 6d 91 33 a3 04 b0 56 d6 74 85 4f 3a a2 1d T.m.3...V.t.O:.. 00:22:29.547 000000a0 40 c4 52 d3 be 3f 55 b8 48 07 84 94 a1 10 68 c1 @.R..?U.H.....h. 00:22:29.547 000000b0 28 54 64 fc 0d e0 c4 b5 ff 00 da 86 7f d3 db fc (Td............. 00:22:29.547 000000c0 d7 f2 47 73 ac 6c d1 38 90 75 a5 bc 58 9a a8 f4 ..Gs.l.8.u..X... 00:22:29.547 000000d0 3e d8 9a ea 43 9b ef c0 8f b2 db 15 7e e4 17 14 >...C.......~... 00:22:29.547 000000e0 ae d7 6e 5f 69 88 1f b2 2d 9b 0c 3d 17 f0 1d 8a ..n_i...-..=.... 00:22:29.547 000000f0 d6 1c 8d f6 b6 be 70 c0 ad 8c d3 ef 3c e0 3a 13 ......p.....<.:. 00:22:29.547 00000100 a2 53 88 61 a0 1e b4 69 a4 33 4d 51 17 a1 10 7a .S.a...i.3MQ...z 00:22:29.547 00000110 db 8a 9e 41 b4 ea 5f ef 7b 75 1a 3c 45 bb 53 b3 ...A.._.{u..5@< 00:22:29.547 00000150 ff 38 2f bc 35 00 ca ec 24 55 51 f0 c9 9e d6 15 .8/.5...$UQ..... 00:22:29.547 00000160 1c 15 54 70 f2 30 a7 8e fb fc 9d aa b3 cd 0e f7 ..Tp.0.......... 00:22:29.547 00000170 1e cc 9c d7 db dd 52 fe b4 af 9b 00 39 6a 3b c7 ......R.....9j;. 00:22:29.547 00000180 78 3a d8 1b aa 61 84 98 97 27 24 c2 1d 9f 47 81 x:...a...'$...G. 
00:22:29.547 00000190 87 c7 f5 f5 b2 7e 77 de 34 3d ec e3 ff 6e 40 38 .....~w.4=...n@8 00:22:29.547 000001a0 ec bf 02 8f 1f 63 ad 7a 8a bb 0c b7 3c b8 72 43 .....c.z....<.rC 00:22:29.547 000001b0 f5 2f 06 51 1a 7a c0 46 f9 dc 4e 8f 4a e0 4c 64 ./.Q.z.F..N.J.Ld 00:22:29.547 000001c0 c0 09 9a a8 cf 84 a1 d0 be db e2 24 b5 7a 26 8f ...........$.z&. 00:22:29.547 000001d0 1e a9 0b c5 a8 65 87 4a 46 4f 08 1a 15 43 c2 e2 .....e.JFO...C.. 00:22:29.547 000001e0 6f de bf 9c aa 4d 4d 89 06 c4 84 7f ef 2d b7 6e o....MM......-.n 00:22:29.547 000001f0 cd 24 fd 73 f1 72 be 8a a0 e0 ce 4d 62 f9 33 e4 .$.s.r.....Mb.3. 00:22:29.547 dh secret: 00:22:29.547 00000000 03 ae 5b 04 96 b5 f1 cf 3f 81 78 38 3a 44 80 a7 ..[.....?.x8:D.. 00:22:29.547 00000010 a4 4f ed 9e 45 47 50 99 1a 9d c6 2b 02 07 4e 6b .O..EGP....+..Nk 00:22:29.547 00000020 13 83 f2 68 c1 bc 0e 0e 06 63 9f eb 22 fe a2 ba ...h.....c.."... 00:22:29.547 00000030 df 48 b8 40 45 80 d9 ba e1 7d 19 13 ca 56 f9 8c .H.@E....}...V.. 00:22:29.547 00000040 9a 88 c9 aa e0 ae 0d 85 a4 3d d2 0e b5 ad 31 73 .........=....1s 00:22:29.547 00000050 15 db 1f 20 b3 12 29 3a 7b af 57 d4 ac 45 33 69 ... ..):{.W..E3i 00:22:29.547 00000060 e4 cd 01 45 63 71 af cc ee d6 4d 11 07 03 de 42 ...Ecq....M....B 00:22:29.547 00000070 01 4e 02 4c 2f fe e0 25 7c 56 75 f2 42 13 83 3f .N.L/..%|Vu.B..? 00:22:29.547 00000080 65 d3 f6 08 c0 5e 0c 22 d0 fc a8 40 b5 d0 85 53 e....^."...@...S 00:22:29.547 00000090 2b 6e 4f 4a d8 e2 fd 7e 55 0c d7 85 9d 98 21 f5 +nOJ...~U.....!. 00:22:29.547 000000a0 24 6c cc b2 2d d6 b7 41 7d b0 2e 18 b3 90 fe 42 $l..-..A}......B 00:22:29.547 000000b0 2a d8 a3 e4 5e 8c 7c 47 97 89 ee 0d 8f 6a 71 02 *...^.|G.....jq. 00:22:29.547 000000c0 cb 01 93 fb 29 62 47 59 1a 56 f8 40 d0 a4 2a 6a ....)bGY.V.@..*j 00:22:29.547 000000d0 7e 33 f5 28 cd 15 7c d1 8c 37 3b 85 40 97 29 d9 ~3.(..|..7;.@.). 00:22:29.547 000000e0 65 ce c1 97 7d 3b b2 ca 35 64 30 6e 9b a9 a4 e4 e...};..5d0n.... 00:22:29.547 000000f0 5a 69 d2 04 1e be 79 03 96 8d 0f 9a 82 7c 16 e4 Zi....y......|.. 00:22:29.547 00000100 87 8d 4e 48 e0 7d 84 a2 1e ac a9 92 5b 76 63 a0 ..NH.}......[vc. 00:22:29.547 00000110 93 18 d7 cd a0 cc f7 4f a0 c2 6d 94 fd 3c 8e b6 .......O..m..<.. 00:22:29.547 00000120 b8 be 26 f1 ac c9 12 cd f3 86 df b4 51 f4 0a bb ..&.........Q... 00:22:29.547 00000130 c4 a0 30 c8 04 89 b7 ea 8a 0c c4 2d 48 ef 9b ac ..0........-H... 00:22:29.547 00000140 f2 04 db 26 fd 54 1a 7c 29 0a 81 3f 91 0c 87 45 ...&.T.|)..?...E 00:22:29.547 00000150 17 d7 ac cb 67 fa f0 87 b5 bd 6b 28 ac f7 8e cd ....g.....k(.... 00:22:29.547 00000160 e9 87 49 75 e0 ff 3e ec 35 ae 9b 8c e5 30 24 2e ..Iu..>.5....0$. 00:22:29.547 00000170 8d b7 9f 23 7b b0 89 5f c7 fd eb 71 3e b9 4f 96 ...#{.._...q>.O. 00:22:29.547 00000180 f0 c6 14 82 14 2f 02 8f 71 d3 3d 4a 53 5b 88 07 ...../..q.=JS[.. 00:22:29.547 00000190 a1 1a de 1c f0 0a 19 e8 b2 6c 60 ff eb e8 ef 06 .........l`..... 00:22:29.547 000001a0 c1 42 c8 50 5c 17 70 2e a7 a5 b1 b0 ac 11 e2 a9 .B.P\.p......... 00:22:29.547 000001b0 0b 85 86 99 19 6e 84 e3 97 f6 a2 dd f9 90 3c 84 .....n........<. 00:22:29.547 000001c0 1e 69 ba 7f da 0b 0b ba ff d5 d1 ce 4f f9 b3 c7 .i..........O... 00:22:29.547 000001d0 5c 8f 0e 74 3b 14 59 93 23 49 72 29 c1 c7 4d fc \..t;.Y.#Ir)..M. 00:22:29.547 000001e0 9b 3c 59 c0 71 9d cf 02 8f 7e b8 26 b0 d8 e4 c8 ........ 00:22:29.547 00000060 82 3c 7a 7d f7 0c c7 86 4d 55 a1 aa 97 5e e4 a7 .W.Y 00:22:29.547 000000a0 a3 d8 e9 28 6b bb 70 e6 10 56 33 42 8a dc cb d3 ...(k.p..V3B.... 
00:22:29.547 000000b0 4a 8e cb 79 e6 ac 17 57 55 a8 0c 30 e9 82 70 5d J..y...WU..0..p] 00:22:29.547 000000c0 67 53 69 e4 cc ef 16 80 68 98 50 0d 2c b3 f1 0d gSi.....h.P.,... 00:22:29.547 000000d0 70 99 32 e3 42 0d 80 18 bd 48 d9 95 d9 f1 d1 f3 p.2.B....H...... 00:22:29.547 000000e0 38 fa 7a c8 be de 17 fb 4a a8 9e 91 06 26 ac d5 8.z.....J....&.. 00:22:29.547 000000f0 6a 4e df 35 ad 53 d4 44 7c 13 a5 87 50 0a 04 41 jN.5.S.D|...P..A 00:22:29.547 00000100 3f 99 1a 29 3d ba 06 29 d0 1e 33 b6 8c c5 b2 aa ?..)=..)..3..... 00:22:29.547 00000110 05 10 dd 31 33 d4 28 1a da 4b 13 3a 42 75 d3 93 ...13.(..K.:Bu.. 00:22:29.547 00000120 1e 11 1d 7f ff ea 68 45 96 ad 48 14 3f 0c cd 06 ......hE..H.?... 00:22:29.547 00000130 1f 7b 21 99 33 76 b2 9f 73 a7 0f 9e b9 0d 90 17 .{!.3v..s....... 00:22:29.547 00000140 fa 39 c3 79 65 fd 5d 68 93 44 42 15 1d ee 03 7b .9.ye.]h.DB....{ 00:22:29.547 00000150 7e 30 6f 12 f6 76 1a 01 13 95 60 aa 4b f8 8e 72 ~0o..v....`.K..r 00:22:29.547 00000160 b5 e7 35 0a f1 fb 45 ec 9c 41 64 bd 8d b1 e5 74 ..5...E..Ad....t 00:22:29.547 00000170 97 13 db 84 52 72 80 0f ea 5e 8a c2 cb 40 3b b5 ....Rr...^...@;. 00:22:29.547 00000180 e4 78 81 b7 5b a7 7e 7f fa 9b f2 29 4f 00 da ae .x..[.~....)O... 00:22:29.547 00000190 29 74 31 51 c8 c4 2f f0 e3 d4 9a 19 86 ab e9 54 )t1Q../........T 00:22:29.547 000001a0 74 1a d9 dd 0b d8 b4 bd 57 be 9c b6 19 96 19 2e t.......W....... 00:22:29.547 000001b0 cb 64 a9 2f 52 65 b6 91 a1 d8 94 e8 eb eb 7b c0 .d./Re........{. 00:22:29.547 000001c0 20 1d 2b 31 5f 8c d3 4f 7c ed c8 eb 14 d0 0a fb .+1_..O|....... 00:22:29.547 000001d0 2d 39 12 d5 dd 1b 6b 00 fc 28 f9 32 ba 0b 9b 7d -9....k..(.2...} 00:22:29.547 000001e0 48 76 a4 66 64 fa 77 27 ac 88 98 98 5f 57 9c 0e Hv.fd.w'...._W.. 00:22:29.547 000001f0 96 f8 a6 f8 ba 61 c1 6f bd 34 1a 5b 8c d5 21 f2 .....a.o.4.[..!. 00:22:29.547 host pubkey: 00:22:29.547 00000000 df b9 73 fd 31 35 26 96 36 11 de 0d 98 4d fc 51 ..s.15&.6....M.Q 00:22:29.547 00000010 be d1 72 63 84 79 4b c9 36 71 b1 a0 d9 47 45 5f ..rc.yK.6q...GE_ 00:22:29.547 00000020 4a 50 c2 86 26 9b 9b ae 50 e5 4f e3 e6 71 aa 47 JP..&...P.O..q.G 00:22:29.547 00000030 31 01 13 a5 7c 87 59 62 76 b3 9c 20 69 3a c2 8c 1...|.Ybv.. i:.. 00:22:29.547 00000040 00 ad 0d ce fb cd 44 55 34 81 ca f6 02 3a 0b 2e ......DU4....:.. 00:22:29.547 00000050 c9 f7 14 b6 a7 bd 9a c1 5d f8 e3 04 78 32 79 c5 ........]...x2y. 00:22:29.547 00000060 76 ec 6a 9c dc 30 ea 20 a6 12 67 28 58 a3 5d 50 v.j..0. ..g(X.]P 00:22:29.547 00000070 68 75 56 0b c8 0c d9 cd 9e 77 72 a1 9c 82 96 68 huV......wr....h 00:22:29.547 00000080 ab fd 31 10 b9 41 f3 87 39 9e 9b 01 14 62 34 02 ..1..A..9....b4. 00:22:29.547 00000090 b1 9f 6b 69 35 aa e0 8b e6 83 e3 b8 e0 0b dc 30 ..ki5..........0 00:22:29.547 000000a0 13 17 cb 58 af 03 19 f1 f1 92 a3 6b e6 de 55 6e ...X.......k..Un 00:22:29.547 000000b0 2f 3c 25 d6 a3 86 30 bd 46 ee c4 ec 74 1e 8d 74 /<%...0.F...t..t 00:22:29.547 000000c0 2a 26 ec 08 69 c6 72 df 47 25 7a 26 d4 a4 89 3c *&..i.r.G%z&...< 00:22:29.547 000000d0 ae 2c 58 80 5f 37 dd 7d af 34 54 ed 4f 8c c4 6b .,X._7.}.4T.O..k 00:22:29.547 000000e0 55 4d 44 3a 58 d1 68 45 8c 52 cc db 20 e7 57 d5 UMD:X.hE.R.. .W. 00:22:29.547 000000f0 e6 1f 04 e9 d4 b8 f3 23 ef c5 95 e5 b4 52 fe 8f .......#.....R.. 00:22:29.547 00000100 10 8a 3b 17 be 93 cc dc 05 3f c1 41 bf b2 70 c0 ..;......?.A..p. 00:22:29.547 00000110 93 42 5b df 84 ac e2 8e c4 c0 37 34 64 49 53 f7 .B[.......74dIS. 
00:22:29.547 00000120 f6 a2 6a b9 7c 39 2b ad 62 c6 37 5b 2e 1e 72 3b ..j.|9+.b.7[..r; 00:22:29.547 00000130 33 1e 83 b2 9d 71 0d 5d be 67 1d df d0 f0 4d 3d 3....q.].g....M= 00:22:29.548 00000140 e7 e9 41 77 d2 dc 0d bd db 08 81 30 70 75 f7 c0 ..Aw.......0pu.. 00:22:29.548 00000150 fd 69 6d 9d f1 23 9f 82 7e af 4d d3 21 b9 3d a3 .im..#..~.M.!.=. 00:22:29.548 00000160 29 4c e9 2e 03 6d cb 6b fa b9 4d 35 7f c1 d0 8a )L...m.k..M5.... 00:22:29.548 00000170 17 5b 8e 8a 47 47 76 3d 1a 39 af 78 fb e6 7f ac .[..GGv=.9.x.... 00:22:29.548 00000180 54 50 da 49 dc 88 ff 5d 61 2b 01 1b 2b 00 a1 cd TP.I...]a+..+... 00:22:29.548 00000190 f8 95 79 e4 f9 ae 0f 1c 77 8c 3c 00 6d 72 d6 a2 ..y.....w.<.mr.. 00:22:29.548 000001a0 1e 07 dc fc 3b ea c4 ab e8 0d cd f3 b2 57 d3 e6 ....;........W.. 00:22:29.548 000001b0 e2 9b 9b 72 89 01 64 52 a9 d5 bd 42 7b 5f 28 99 ...r..dR...B{_(. 00:22:29.548 000001c0 18 50 49 20 0d a5 61 bc b2 37 c5 e2 66 62 21 70 .PI ..a..7..fb!p 00:22:29.548 000001d0 b6 e1 ab 2f f6 9e 1a 31 73 74 ed 0c ed 9a ad f8 .../...1st...... 00:22:29.548 000001e0 70 19 b2 46 8f 31 4e 51 46 73 41 9c dc b9 e0 21 p..F.1NQFsA....! 00:22:29.548 000001f0 24 8e 99 b0 75 b3 f0 8e 6a 3d e0 91 01 98 2c 70 $...u...j=....,p 00:22:29.548 dh secret: 00:22:29.548 00000000 7b 67 20 94 ba 57 9f 9e 49 3e 71 38 8f ac db 6d {g ..W..I>q8...m 00:22:29.548 00000010 4c 9f 07 e9 a9 ee df 59 93 fc 7c cc ab 95 98 33 L......Y..|....3 00:22:29.548 00000020 c2 2e 7b 7b 3a 22 bb a2 fa 6e 40 67 5a 8b 41 14 ..{{:"...n@gZ.A. 00:22:29.548 00000030 cb da 5d 70 1b 67 b2 d5 1c cc 42 76 a4 b7 00 b5 ..]p.g....Bv.... 00:22:29.548 00000040 81 be 76 cc cd 3a ef a1 fa f2 ff 20 c8 85 98 fe ..v..:..... .... 00:22:29.548 00000050 34 48 68 93 e9 c9 c0 d9 a9 95 a1 d9 32 09 9a c3 4Hh.........2... 00:22:29.548 00000060 20 eb 70 f7 90 c9 8d 2c 75 da 3c 52 e0 b9 aa 12 .p....,u....Z.....vq.... 00:22:29.548 000001d0 d6 8f 0a ce a0 33 61 5d d3 f9 f2 5f 28 5a c0 dc .....3a]..._(Z.. 
00:22:29.548 000001e0 d9 74 49 22 e3 ee 68 f8 0d 8a 48 d7 46 eb 83 6b .tI"..h...H.F..k 00:22:29.548 000001f0 e9 2c 0b 98 45 eb 2c 5b e9 de 96 a7 93 ec 54 75 .,..E.,[......Tu 00:22:29.548 [2024-09-27 13:27:24.447087] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key4, hash=3, dhgroup=3, seq=3775755301, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.548 [2024-09-27 13:27:24.447420] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.548 [2024-09-27 13:27:24.474246] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.548 [2024-09-27 13:27:24.474755] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.548 [2024-09-27 13:27:24.475206] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.548 [2024-09-27 13:27:24.527607] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.548 [2024-09-27 13:27:24.527882] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.548 [2024-09-27 13:27:24.528377] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 3 (ffdhe4096) 00:22:29.548 [2024-09-27 13:27:24.528742] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.548 [2024-09-27 13:27:24.529279] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.548 ctrlr pubkey: 00:22:29.548 00000000 da 04 5e a9 bc 77 9e 32 23 03 89 dd ff f7 d9 b4 ..^..w.2#....... 00:22:29.548 00000010 f2 85 ed db cd 53 9e 2f fb f8 ab ee e0 bf 90 8b .....S./........ 00:22:29.548 00000020 31 c3 75 8b 90 ae 43 2c 68 28 7e bd 37 5e ce 09 1.u...C,h(~.7^.. 00:22:29.548 00000030 c5 4e 37 e9 a0 83 de 7c 1b 7b 38 bf b8 a4 5a 36 .N7....|.{8...Z6 00:22:29.548 00000040 ce 4c c9 50 d1 f8 2b d1 65 71 9c a3 40 38 5f 6b .L.P..+.eq..@8_k 00:22:29.548 00000050 b2 e9 30 14 1f 9e e8 2c 3e 18 d8 80 b7 fc 17 b4 ..0....,>....... 00:22:29.548 00000060 82 3c 7a 7d f7 0c c7 86 4d 55 a1 aa 97 5e e4 a7 .W.Y 00:22:29.548 000000a0 a3 d8 e9 28 6b bb 70 e6 10 56 33 42 8a dc cb d3 ...(k.p..V3B.... 00:22:29.548 000000b0 4a 8e cb 79 e6 ac 17 57 55 a8 0c 30 e9 82 70 5d J..y...WU..0..p] 00:22:29.548 000000c0 67 53 69 e4 cc ef 16 80 68 98 50 0d 2c b3 f1 0d gSi.....h.P.,... 00:22:29.548 000000d0 70 99 32 e3 42 0d 80 18 bd 48 d9 95 d9 f1 d1 f3 p.2.B....H...... 00:22:29.548 000000e0 38 fa 7a c8 be de 17 fb 4a a8 9e 91 06 26 ac d5 8.z.....J....&.. 00:22:29.548 000000f0 6a 4e df 35 ad 53 d4 44 7c 13 a5 87 50 0a 04 41 jN.5.S.D|...P..A 00:22:29.548 00000100 3f 99 1a 29 3d ba 06 29 d0 1e 33 b6 8c c5 b2 aa ?..)=..)..3..... 00:22:29.548 00000110 05 10 dd 31 33 d4 28 1a da 4b 13 3a 42 75 d3 93 ...13.(..K.:Bu.. 00:22:29.548 00000120 1e 11 1d 7f ff ea 68 45 96 ad 48 14 3f 0c cd 06 ......hE..H.?... 00:22:29.548 00000130 1f 7b 21 99 33 76 b2 9f 73 a7 0f 9e b9 0d 90 17 .{!.3v..s....... 
00:22:29.548 00000140 fa 39 c3 79 65 fd 5d 68 93 44 42 15 1d ee 03 7b .9.ye.]h.DB....{ 00:22:29.548 00000150 7e 30 6f 12 f6 76 1a 01 13 95 60 aa 4b f8 8e 72 ~0o..v....`.K..r 00:22:29.548 00000160 b5 e7 35 0a f1 fb 45 ec 9c 41 64 bd 8d b1 e5 74 ..5...E..Ad....t 00:22:29.548 00000170 97 13 db 84 52 72 80 0f ea 5e 8a c2 cb 40 3b b5 ....Rr...^...@;. 00:22:29.548 00000180 e4 78 81 b7 5b a7 7e 7f fa 9b f2 29 4f 00 da ae .x..[.~....)O... 00:22:29.548 00000190 29 74 31 51 c8 c4 2f f0 e3 d4 9a 19 86 ab e9 54 )t1Q../........T 00:22:29.548 000001a0 74 1a d9 dd 0b d8 b4 bd 57 be 9c b6 19 96 19 2e t.......W....... 00:22:29.548 000001b0 cb 64 a9 2f 52 65 b6 91 a1 d8 94 e8 eb eb 7b c0 .d./Re........{. 00:22:29.548 000001c0 20 1d 2b 31 5f 8c d3 4f 7c ed c8 eb 14 d0 0a fb .+1_..O|....... 00:22:29.548 000001d0 2d 39 12 d5 dd 1b 6b 00 fc 28 f9 32 ba 0b 9b 7d -9....k..(.2...} 00:22:29.548 000001e0 48 76 a4 66 64 fa 77 27 ac 88 98 98 5f 57 9c 0e Hv.fd.w'...._W.. 00:22:29.548 000001f0 96 f8 a6 f8 ba 61 c1 6f bd 34 1a 5b 8c d5 21 f2 .....a.o.4.[..!. 00:22:29.548 host pubkey: 00:22:29.548 00000000 5e 2d 87 7b ba c1 48 73 5a e1 2b 0a 47 fb 2e 51 ^-.{..HsZ.+.G..Q 00:22:29.548 00000010 d5 b2 f2 00 93 37 6d 7b a6 3c 0e de ea 78 eb cf .....7m{.<...x.. 00:22:29.548 00000020 6b 61 be 41 22 c1 6f 6f e8 8b fd 50 76 f6 33 42 ka.A".oo...Pv.3B 00:22:29.548 00000030 16 77 e1 bd d6 1f f2 36 5f c9 78 c1 52 b5 78 29 .w.....6_.x.R.x) 00:22:29.548 00000040 a4 ec e1 43 dd 42 f8 99 a3 ad 76 c5 22 99 cc ab ...C.B....v."... 00:22:29.548 00000050 47 21 1f 64 44 8d de af 40 14 10 54 be a9 dd e3 G!.dD...@..T.... 00:22:29.548 00000060 94 fa 0b 71 53 2a f4 53 b3 50 52 27 d6 fa 8c f8 ...qS*.S.PR'.... 00:22:29.548 00000070 bf 1b c2 a9 83 df 90 25 bd 25 5c 48 03 f9 b1 6c .......%.%\H...l 00:22:29.548 00000080 98 b7 c4 1e c9 1c 0d c8 31 bc 79 1e d3 51 ea 89 ........1.y..Q.. 00:22:29.548 00000090 8d bd fe d3 a8 29 f5 39 ee 79 fc a6 67 4e 29 eb .....).9.y..gN). 00:22:29.548 000000a0 b0 71 98 ee 4d 75 2c 09 57 e5 a3 67 2f 60 cb 4e .q..Mu,.W..g/`.N 00:22:29.548 000000b0 f7 d4 d7 dd 83 a6 75 ec 13 06 9c 97 98 91 c2 68 ......u........h 00:22:29.548 000000c0 f0 36 05 8b 78 f9 e0 37 49 9a 76 af 30 cc 8a ff .6..x..7I.v.0... 00:22:29.548 000000d0 85 e8 6d fd 7f 08 ff c2 51 84 0a 4a 69 78 75 7e ..m.....Q..Jixu~ 00:22:29.548 000000e0 32 2c d3 7d dc 81 eb 1f ea cf 59 39 65 c5 46 61 2,.}......Y9e.Fa 00:22:29.548 000000f0 c9 32 f1 61 bb 13 cb 4b 26 64 86 50 f5 c1 39 14 .2.a...K&d.P..9. 00:22:29.548 00000100 fb 14 7b cf b3 2c 59 df 3e 63 16 ad c8 b3 5c d5 ..{..,Y.>c....\. 00:22:29.548 00000110 db 50 b9 4f 60 88 33 95 1a 96 f1 62 88 38 35 90 .P.O`.3....b.85. 00:22:29.548 00000120 8e be 37 60 62 33 a8 9f 9c fe 59 fc 53 ba 47 27 ..7`b3....Y.S.G' 00:22:29.548 00000130 42 c1 2f 08 ee 90 b0 e3 6d 5a 87 46 58 70 a1 03 B./.....mZ.FXp.. 00:22:29.548 00000140 fe 7c 12 d1 7e ff 26 a1 7b 4f 51 18 a1 0a 5b 81 .|..~.&.{OQ...[. 00:22:29.548 00000150 9a 94 39 5a 54 ec ea 20 c4 91 4b ab fc dd 06 80 ..9ZT.. ..K..... 00:22:29.548 00000160 25 2e 6b dd e2 07 e6 8f dc 62 9d 9e 57 97 c5 4d %.k......b..W..M 00:22:29.548 00000170 f1 22 f0 40 1b b6 33 24 6d 68 09 ee 43 c5 ab 8d .".@..3$mh..C... 00:22:29.548 00000180 9b 59 ef 1d 88 9a 07 d5 f0 de 06 37 5b 99 a5 0c .Y.........7[... 00:22:29.548 00000190 98 a4 bc ef 94 c6 dc fe e0 e8 1e 6c 24 fa de ce ...........l$... 00:22:29.548 000001a0 f0 7f a1 d2 f4 a5 2b 89 ea 56 69 23 0d 50 75 3d ......+..Vi#.Pu= 00:22:29.548 000001b0 a2 58 84 9a f4 83 5e 5b c5 9c 48 5f a4 ea 2a 8e .X....^[..H_..*. 
00:22:29.548 000001c0 2d b6 8f ee ac e7 10 ce 1c 49 a0 d1 1f fa d8 6a -........I.....j 00:22:29.548 000001d0 97 bf 21 33 d1 95 c5 21 76 4a 8b 59 26 00 05 84 ..!3...!vJ.Y&... 00:22:29.548 000001e0 f1 31 40 fe c6 ff b8 fd 88 20 e6 2c 2f 5d e2 bf .1@...... .,/].. 00:22:29.548 000001f0 9c f4 1b 81 6a 29 49 8f a2 dd 5b 85 3c 10 c8 14 ....j)I...[.<... 00:22:29.548 dh secret: 00:22:29.548 00000000 1b 05 a6 a6 72 8f cd 6c 71 ff 68 2e b8 87 e4 87 ....r..lq.h..... 00:22:29.548 00000010 d1 22 6f f2 d9 a0 21 a5 c7 1d ec eb e7 65 f1 b5 ."o...!......e.. 00:22:29.548 00000020 b6 10 33 72 89 7c ca d0 bc 31 0f 52 8d 09 e9 8b ..3r.|...1.R.... 00:22:29.548 00000030 c4 0a 55 00 9e 93 59 9e a4 ab 05 8c fd 9a 26 c0 ..U...Y.......&. 00:22:29.548 00000040 81 1e 87 0d 50 50 32 21 3f fc d3 79 e2 c5 fb 36 ....PP2!?..y...6 00:22:29.548 00000050 c8 32 0e 87 37 b1 ca a0 17 2a 34 79 d0 ce a4 a0 .2..7....*4y.... 00:22:29.548 00000060 3b 14 c8 c2 90 89 d6 83 d0 39 6f 06 f5 29 a8 a8 ;........9o..).. 00:22:29.548 00000070 c3 0a 3c cf 81 84 b9 f7 50 51 51 b1 54 6b d1 48 ..<.....PQQ.Tk.H 00:22:29.548 00000080 2f 51 70 89 a4 ed 27 cc ae b5 a7 73 5e 3e dd 9a /Qp...'....s^>.. 00:22:29.548 00000090 7f 8a 56 48 67 60 86 c6 86 e8 4a ba be 52 1f cf ..VHg`....J..R.. 00:22:29.548 000000a0 e3 ab 39 2d 7f bf 9c 7a c6 21 06 9d 07 71 4f 5b ..9-...z.!...qO[ 00:22:29.548 000000b0 15 1b 6a 23 66 71 f7 87 1f ac 39 f2 ae a4 df 26 ..j#fq....9....& 00:22:29.549 000000c0 ba 4b ab 2a c1 6a 91 a6 ca 36 f4 7c ef 4b a3 ae .K.*.j...6.|.K.. 00:22:29.549 000000d0 3e fb ad 53 41 b5 0c a9 84 38 7d 17 ff 84 a1 9b >..SA....8}..... 00:22:29.549 000000e0 34 9a 26 bb ef 73 d8 7e a2 32 4c 3d 60 6b e1 41 4.&..s.~.2L=`k.A 00:22:29.549 000000f0 35 2e 29 73 d4 14 fe f4 cf 7d 5b ad f0 7a 81 46 5.)s.....}[..z.F 00:22:29.549 00000100 2a 9d 74 12 5c ae a2 a3 ad f8 27 dc 3d 64 4a 13 *.t.\.....'.=dJ. 00:22:29.549 00000110 08 06 7d 1a 50 79 20 87 ad 2d fb 77 bd 07 fb aa ..}.Py ..-.w.... 00:22:29.549 00000120 16 72 b5 31 9c 07 38 49 b9 2d b9 77 0a e2 33 ba .r.1..8I.-.w..3. 00:22:29.549 00000130 47 b9 80 4f 4d ee 3e 26 34 18 12 83 9f 81 3c e7 G..OM.>&4.....<. 00:22:29.549 00000140 ae 65 83 59 61 d2 ba f8 49 1b 4d d5 66 f0 8f 0c .e.Ya...I.M.f... 00:22:29.549 00000150 71 84 06 0d 2b 7f c7 64 ef 9f cc eb e3 ae ff 97 q...+..d........ 00:22:29.549 00000160 35 ae c5 3e cf 42 61 1a df b3 50 e6 ff 64 19 02 5..>.Ba...P..d.. 00:22:29.549 00000170 68 97 46 38 35 b3 8d 4d a3 ae 57 dc 94 cd 42 b0 h.F85..M..W...B. 00:22:29.549 00000180 fa 41 97 6b 8c 4a d9 50 7b f5 1e 80 22 ef 54 65 .A.k.J.P{...".Te 00:22:29.549 00000190 8d 3f 7f bd 5f 4f 7e f0 56 70 7a ac af ff 53 b8 .?.._O~.Vpz...S. 00:22:29.549 000001a0 3f c3 79 b6 4e cf 55 ea 81 bb 7e 78 b0 e6 25 be ?.y.N.U...~x..%. 00:22:29.549 000001b0 98 ca af 76 38 f4 ae 74 d3 80 5d d7 ae 1c e7 d5 ...v8..t..]..... 00:22:29.549 000001c0 99 31 c4 b3 6b 06 a3 04 9f f2 c7 20 a4 ed f9 c1 .1..k...... .... 
00:22:29.549 000001d0 b3 e0 c1 0e b1 89 32 98 c5 f9 cb 9d 79 f6 54 35 ......2.....y.T5 00:22:29.549 000001e0 d2 89 c7 ea 0c aa 84 9c 29 2d aa 06 c6 3b d1 3a ........)-...;.: 00:22:29.549 000001f0 8a 49 d3 7a aa 55 92 0f 51 d9 aa 8e fc b1 10 72 .I.z.U..Q......r 00:22:29.549 [2024-09-27 13:27:24.560779] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key4, hash=3, dhgroup=3, seq=3775755302, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.549 [2024-09-27 13:27:24.561237] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.549 [2024-09-27 13:27:24.587140] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.549 [2024-09-27 13:27:24.587442] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.549 [2024-09-27 13:27:24.587565] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.549 [2024-09-27 13:27:24.714795] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.549 [2024-09-27 13:27:24.715060] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.549 [2024-09-27 13:27:24.715247] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 4 (ffdhe6144) 00:22:29.549 [2024-09-27 13:27:24.715430] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.549 [2024-09-27 13:27:24.715740] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.549 ctrlr pubkey: 00:22:29.549 00000000 1e de 53 2b dc 5c 5d 29 64 b5 2b 20 bd 1d b3 e5 ..S+.\])d.+ .... 00:22:29.549 00000010 f2 11 af f3 9b f1 e7 d7 96 3d a7 59 64 22 2f 09 .........=.Yd"/. 00:22:29.549 00000020 3a b4 68 d0 ea 99 af 6c d6 8d c6 7d 97 59 a3 bd :.h....l...}.Y.. 00:22:29.549 00000030 7d 76 4d ca 12 49 30 cf 2b 18 bb 4e 73 39 cb 43 }vM..I0.+..Ns9.C 00:22:29.549 00000040 17 11 da b1 74 b7 20 dd d9 4f a8 3c 80 c4 d8 7a ....t. ..O.<...z 00:22:29.549 00000050 2f a7 a1 04 f8 e1 3c 09 f4 89 61 f7 b6 59 6b b4 /.....<...a..Yk. 00:22:29.549 00000060 55 5a 53 cd d8 6e c2 c0 3a b8 08 ad 3e 8b 91 42 UZS..n..:...>..B 00:22:29.549 00000070 26 62 a1 84 19 f4 62 f2 39 b1 31 82 fc 06 f4 d9 &b....b.9.1..... 00:22:29.549 00000080 9b db 52 71 36 20 a5 e4 19 e1 f5 f0 63 b4 a1 df ..Rq6 ......c... 00:22:29.549 00000090 04 a4 f3 e1 5f de bc 21 0a a7 7d 3f fb 9d f9 97 ...._..!..}?.... 00:22:29.549 000000a0 2e 18 e1 c0 26 ac 2e 5b 42 a6 b2 da 63 a5 c3 d7 ....&..[B...c... 00:22:29.549 000000b0 7e c7 27 c0 fc 0e ea f2 4c 9f de e8 24 9f 11 d4 ~.'.....L...$... 00:22:29.549 000000c0 78 81 86 11 dd cc 46 53 48 62 a9 bc 16 32 c9 92 x.....FSHb...2.. 00:22:29.549 000000d0 ab 3d b2 ab c8 70 01 eb a9 0a 15 fe 4a f6 b9 8c .=...p......J... 
00:22:29.549 000000e0 e1 46 f5 32 e7 ce 8e d4 2e 2f a3 40 9c 79 89 43 .F.2...../.@.y.C 00:22:29.549 000000f0 c8 cb af be 8c 72 01 2f dd 0b 01 53 22 6e 70 79 .....r./...S"npy 00:22:29.549 00000100 cb 1b 14 28 45 24 50 31 04 b4 f7 a1 8d 63 ac 1c ...(E$P1.....c.. 00:22:29.549 00000110 b3 fc a3 40 a4 b8 3e 6a 33 b6 33 fb 55 4c 4d dc ...@..>j3.3.ULM. 00:22:29.549 00000120 91 af 4a 20 a6 c7 f7 c9 2f 44 c0 b9 51 22 14 6f ..J ..../D..Q".o 00:22:29.549 00000130 aa bf 11 b1 d7 7c e8 eb 52 1e 26 d9 04 e3 f8 01 .....|..R.&..... 00:22:29.549 00000140 32 11 30 80 f5 b1 6a 40 39 e5 7e 56 c6 a8 1d 67 2.0...j@9.~V...g 00:22:29.549 00000150 74 d7 72 31 24 93 84 14 d6 54 ff 3a 3f 91 3c 7e t.r1$....T.:?.<~ 00:22:29.549 00000160 f9 42 99 8f 78 45 be 4d 27 55 2b 7e 5b 7b 8d f9 .B..xE.M'U+~[{.. 00:22:29.549 00000170 a4 7e 21 22 52 64 8d 23 e9 0e 7a aa b0 2b 19 41 .~!"Rd.#..z..+.A 00:22:29.549 00000180 eb cb aa f7 3b cf 12 9e 43 c3 f5 ff f5 e9 0a 1b ....;...C....... 00:22:29.549 00000190 da 60 c6 db 4f e6 13 49 d3 f2 2d d4 9d 0d dc bf .`..O..I..-..... 00:22:29.549 000001a0 1b 7e 48 58 f4 4d 91 49 ed ef 0c 3b 95 0a b5 d1 .~HX.M.I...;.... 00:22:29.549 000001b0 52 3d 06 82 e9 39 c8 ae 1c c7 b6 50 8a 76 60 3d R=...9.....P.v`= 00:22:29.549 000001c0 f7 f6 e6 c9 b2 53 c4 ff 05 23 04 82 65 6f a5 1d .....S...#..eo.. 00:22:29.549 000001d0 27 5c 44 dc a4 0a 4a 81 21 b1 d4 1c 43 27 ea 0f '\D...J.!...C'.. 00:22:29.549 000001e0 d3 f9 bd c9 75 e0 bf 51 41 01 10 2e c4 86 33 8f ....u..QA.....3. 00:22:29.549 000001f0 23 05 4e f9 e5 62 51 86 db fc 76 8f 80 a6 e4 d1 #.N..bQ...v..... 00:22:29.549 00000200 5a f5 cf f6 12 7d 12 42 1d 42 20 2b 77 c3 55 1c Z....}.B.B +w.U. 00:22:29.549 00000210 21 00 47 bd 3e 4f 48 a6 79 c0 8b aa d5 0b 19 53 !.G.>OH.y......S 00:22:29.549 00000220 36 e0 c8 19 0a 56 ed ab 4b c9 92 a9 65 8b 88 5b 6....V..K...e..[ 00:22:29.549 00000230 be 58 0e 04 7c a1 a2 ae 4f 76 6e 0d 86 c2 62 3d .X..|...Ovn...b= 00:22:29.549 00000240 b8 2d f5 26 26 79 76 b2 91 cd 2c 46 e1 52 ca 56 .-.&&yv...,F.R.V 00:22:29.549 00000250 f2 7d c2 1a 3b 8f 6b f2 85 39 d9 81 c3 ca 61 40 .}..;.k..9....a@ 00:22:29.549 00000260 1a c2 e5 e3 f1 05 4d dd dc ef 6b 47 10 c7 d1 3c ......M...kG...< 00:22:29.549 00000270 02 bb dd 06 84 e7 b3 8d 40 4c 94 da 45 e9 0b a0 ........@L..E... 00:22:29.549 00000280 69 34 03 ae 23 f7 06 cd 63 89 f6 c1 11 18 4b 29 i4..#...c.....K) 00:22:29.549 00000290 84 83 b6 31 74 7f 99 5b b9 47 cd 42 38 be e9 ed ...1t..[.G.B8... 00:22:29.549 000002a0 72 cc 75 4e f7 0d 0e 0b f3 1e d4 ac b4 21 89 99 r.uN.........!.. 00:22:29.549 000002b0 81 4d 83 f2 49 d6 0c 04 32 78 1a d1 37 13 8e 09 .M..I...2x..7... 00:22:29.549 000002c0 cf c4 01 1d 1b 78 8d 55 92 f7 f0 ec 89 dc e6 84 .....x.U........ 00:22:29.549 000002d0 65 f4 e9 42 2c 11 29 12 8f 34 6c c5 58 44 ac 64 e..B,.)..4l.XD.d 00:22:29.549 000002e0 a2 2e 15 fc 20 c4 25 e1 d3 bc 61 25 0c e5 98 01 .... .%...a%.... 00:22:29.549 000002f0 13 23 9e e8 14 5e ab 75 32 e5 a6 98 64 08 21 08 .#...^.u2...d.!. 00:22:29.549 host pubkey: 00:22:29.549 00000000 05 94 6b 8c e6 d0 32 7d 58 2d 90 89 9e 59 a0 9e ..k...2}X-...Y.. 00:22:29.549 00000010 9f 1b 32 58 17 2e 61 60 cd ba 59 87 8b 8b 7b 84 ..2X..a`..Y...{. 00:22:29.549 00000020 5d 2d 6c a5 e8 eb c0 0f 93 0e 5d d9 32 7e e7 8c ]-l.......].2~.. 00:22:29.549 00000030 b4 c6 2d 31 e9 65 db e1 36 ad 3f 75 bd 6c 3f c6 ..-1.e..6.?u.l?. 00:22:29.549 00000040 47 79 2b 6f 1a 4b 86 12 0d 1e 6e f4 f4 12 9f 6a Gy+o.K....n....j 00:22:29.549 00000050 22 f4 80 c5 0b 19 bb e4 c9 e0 12 76 da 99 7e 19 "..........v..~. 
00:22:29.549 00000060 f0 eb 94 96 e3 27 2a 93 58 9a ae e3 27 ca cf 70 .....'*.X...'..p 00:22:29.549 00000070 c5 3a 81 61 8d 40 46 d5 f9 bd d7 b3 5e d6 22 77 .:.a.@F.....^."w 00:22:29.549 00000080 a6 2e c8 32 6e aa 00 c7 4b 28 dd c5 a3 de 13 c1 ...2n...K(...... 00:22:29.549 00000090 33 02 61 e9 77 32 40 08 9b 60 18 d4 aa e2 3f b8 3.a.w2@..`....?. 00:22:29.549 000000a0 6e 62 dd e2 ad 8e 52 43 75 d8 92 ee a8 d7 af 8d nb....RCu....... 00:22:29.549 000000b0 87 66 3a 17 bb 37 55 e5 58 e1 f7 c3 be 64 ee 9c .f:..7U.X....d.. 00:22:29.549 000000c0 99 7a ab 7e 5d 75 4d ef ac 4c 5d e9 fb b4 e3 fd .z.~]uM..L]..... 00:22:29.549 000000d0 d6 e8 8f 87 e9 09 98 26 f8 58 97 fb 9c 21 5f 0b .......&.X...!_. 00:22:29.549 000000e0 ad f7 d4 37 d4 6e 3e 8d 64 a3 19 df 08 bf 91 cd ...7.n>.d....... 00:22:29.549 000000f0 a2 fc 67 80 34 94 9a e2 4a f8 28 d2 2e c2 80 1c ..g.4...J.(..... 00:22:29.549 00000100 51 e2 92 7b e4 73 be 79 09 b5 8b 22 8d 66 a1 00 Q..{.s.y...".f.. 00:22:29.549 00000110 39 e2 e5 d0 cf 14 93 28 97 2b 6f 08 41 38 26 1d 9......(.+o.A8&. 00:22:29.549 00000120 eb af e0 0f e8 84 3f 1d 3c 8b 49 59 e4 f6 3f 10 ......?.<.IY..?. 00:22:29.549 00000130 e4 ac 66 1b cd 61 73 4c 6f 6f 25 74 fa fc 20 3f ..f..asLoo%t.. ? 00:22:29.549 00000140 2e 77 c4 4c a7 60 24 00 37 af b6 61 62 c5 19 94 .w.L.`$.7..ab... 00:22:29.549 00000150 06 0b d9 fc 3c 51 e4 06 e7 1e 71 01 33 9c 28 be ....q..., 00:22:29.549 00000220 7e f9 00 10 6a cf e9 15 cc 87 47 97 21 f3 7e 5a ~...j.....G.!.~Z 00:22:29.549 00000230 e8 57 72 c9 a6 44 f4 d5 3e fd ac 2a cf d6 fe f8 .Wr..D..>..*.... 00:22:29.549 00000240 8e 62 76 db 9e 7f da e9 2d 76 51 95 ff 60 b0 22 .bv.....-vQ..`." 00:22:29.549 00000250 13 e5 a4 36 6a a7 2b 48 54 96 c6 9b b6 96 9c eb ...6j.+HT....... 00:22:29.549 00000260 1b 0c ff 5e a3 3f e2 1f 2f 01 78 f8 12 70 22 c1 ...^.?../.x..p". 00:22:29.549 00000270 22 55 73 90 ce 13 4b 29 12 d5 59 c5 a5 52 4e 5b "Us...K)..Y..RN[ 00:22:29.549 00000280 b4 b7 9d d6 b3 5a 60 a9 cc cb e4 8a a1 72 24 a3 .....Z`......r$. 00:22:29.549 00000290 90 9d 47 45 62 7c 76 04 e5 95 c1 ab 0d 2a eb 93 ..GEb|v......*.. 00:22:29.549 000002a0 ef ca 1c 69 f0 03 fa f6 da 16 9e a1 8b e7 c7 37 ...i...........7 00:22:29.549 000002b0 f2 ce dc 55 1e f1 bf 06 03 52 4d f2 cd 4b 37 86 ...U.....RM..K7. 00:22:29.549 000002c0 51 5b 63 32 44 ea 03 5b 7b e8 61 ce 75 2c 47 98 Q[c2D..[{.a.u,G. 00:22:29.549 000002d0 8d 82 cb 31 e7 f0 44 79 52 ad 52 e7 a4 ca 2c d3 ...1..DyR.R...,. 00:22:29.549 000002e0 2d 69 e5 2c 5c 79 eb a6 b5 59 00 1d 8a ea 3e c0 -i.,\y...Y....>. 00:22:29.549 000002f0 34 40 10 ce 47 63 ef ff a6 da 52 96 6b 8a a8 1b 4@..Gc....R.k... 00:22:29.549 dh secret: 00:22:29.549 00000000 62 34 69 d0 0d 0c a2 4b 50 ce 08 0e b1 9e 7a 80 b4i....KP.....z. 00:22:29.549 00000010 94 c4 35 f3 d6 b3 27 6c cc 04 2b 62 d4 22 d8 c9 ..5...'l..+b.".. 00:22:29.549 00000020 b1 11 c2 fc 4d 32 8a 85 e2 a4 2c 35 6e be 9b 6e ....M2....,5n..n 00:22:29.549 00000030 24 53 f2 f0 33 af 67 47 55 f1 8f b9 b2 22 21 dd $S..3.gGU...."!. 00:22:29.549 00000040 46 e7 fe df d8 8d 12 48 19 cc da fb af dc ab 01 F......H........ 00:22:29.549 00000050 df 57 71 31 b2 2b d5 5b aa 7f 77 ab 49 0e 46 6f .Wq1.+.[..w.I.Fo 00:22:29.549 00000060 80 e2 0c 3f c0 1b 9a d4 61 52 aa e6 47 f2 15 9d ...?....aR..G... 00:22:29.549 00000070 6e 71 d0 a4 56 1b 0b a4 19 6a a1 c3 9a 89 ab 0a nq..V....j...... 00:22:29.549 00000080 73 b9 95 b7 c1 ee 89 75 1d 4d 12 77 22 69 25 3b s......u.M.w"i%; 00:22:29.549 00000090 07 20 59 19 db aa 1c 9d 12 7c 2c 81 ef b0 ee 5f . 
Y......|,...._ 00:22:29.549 000000a0 e3 cb 12 d1 5e 56 28 a7 16 10 1f 3c 8e eb c0 ea ....^V(....<.... 00:22:29.549 000000b0 39 a5 89 a1 a8 7a 20 0c 2b 05 31 75 c6 e4 32 13 9....z .+.1u..2. 00:22:29.549 000000c0 34 47 d3 23 f2 cc 33 26 82 85 e2 f6 df 5c fc b1 4G.#..3&.....\.. 00:22:29.549 000000d0 dd 84 cd d2 10 03 5f c2 74 ae e8 ff 28 6c cb 19 ......_.t...(l.. 00:22:29.549 000000e0 74 d5 fe 71 ee 48 2c cf be bf 70 7a be 7c 4d 59 t..q.H,...pz.|MY 00:22:29.549 000000f0 1a e7 0c 59 8e c5 f5 6d b0 88 51 4a 71 b1 b9 7e ...Y...m..QJq..~ 00:22:29.549 00000100 b6 e1 82 38 6f b5 19 86 f0 3a 1e 1a 2e 2c 98 07 ...8o....:...,.. 00:22:29.549 00000110 f0 71 ac 3b 0a 41 7b e0 bc 5b 33 af 67 1d 14 53 .q.;.A{..[3.g..S 00:22:29.549 00000120 a9 59 37 ee 9a f7 cb fe d1 6e a8 5e 43 cc ab 24 .Y7......n.^C..$ 00:22:29.549 00000130 04 ad 48 51 e7 54 f8 03 09 6a df 30 87 90 89 00 ..HQ.T...j.0.... 00:22:29.549 00000140 9a 6a 0a 79 9e 93 56 2a 26 9f 46 0d 7b f2 df 02 .j.y..V*&.F.{... 00:22:29.549 00000150 2b b0 89 2f 78 31 09 48 77 cb 4f 49 2c e6 f9 2a +../x1.Hw.OI,..* 00:22:29.549 00000160 58 10 95 cc 3c 24 6d 81 21 fb 55 ea 23 dc c0 88 X...<$m.!.U.#... 00:22:29.549 00000170 95 a0 40 83 e6 e1 8b 67 57 0c e8 96 17 03 96 6e ..@....gW......n 00:22:29.549 00000180 0b 56 76 a2 2b 4b e9 12 29 ff 31 c8 91 15 ce 66 .Vv.+K..).1....f 00:22:29.549 00000190 76 4c 82 d9 59 cb 8b 49 c2 ee b7 8c 9a 02 b6 5d vL..Y..I.......] 00:22:29.549 000001a0 df 08 fe d2 37 4c 33 01 48 18 bb ad 59 fa f9 3e ....7L3.H...Y..> 00:22:29.549 000001b0 f2 94 dd 91 8d 87 8c 5b b4 28 3a aa b0 86 99 d8 .......[.(:..... 00:22:29.549 000001c0 71 03 ca 8c aa 11 a7 a7 df 72 80 fc a3 66 c6 cb q........r...f.. 00:22:29.549 000001d0 bd 5c f7 d8 f1 67 85 a1 8a be 49 77 78 53 74 b0 .\...g....IwxSt. 00:22:29.549 000001e0 55 94 f1 05 60 cc a3 60 89 5a b4 de e0 ac 3b 7c U...`..`.Z....;| 00:22:29.549 000001f0 d7 d6 73 40 a5 15 66 60 33 44 ff 24 f9 41 e7 43 ..s@..f`3D.$.A.C 00:22:29.549 00000200 a2 ec 08 3f 72 01 21 14 08 04 c0 b1 83 a8 b6 54 ...?r.!........T 00:22:29.549 00000210 b8 bc 51 fd 3d b6 6d 08 9d e0 b4 93 7c 19 35 8a ..Q.=.m.....|.5. 00:22:29.549 00000220 b8 48 33 db bc 0a 7f 1b 66 d2 d0 6d f0 c1 7c f9 .H3.....f..m..|. 00:22:29.549 00000230 a0 07 48 6c 29 a8 7c a5 ad 23 95 17 28 57 6a 59 ..Hl).|..#..(WjY 00:22:29.549 00000240 e6 dc c8 93 d2 2b 50 aa e0 4e 6e d9 5d 16 91 93 .....+P..Nn.]... 00:22:29.549 00000250 4b b3 80 ca fc 37 38 fe 59 c2 a1 83 91 af 77 49 K....78.Y.....wI 00:22:29.549 00000260 a5 ef e7 41 ae f1 ba 8e da 83 fa 30 b1 d0 a0 e9 ...A.......0.... 00:22:29.550 00000270 78 0a 7b b4 8c 19 c0 26 e8 25 45 9f d4 71 7c 6d x.{....&.%E..q|m 00:22:29.550 00000280 f8 bd 93 31 04 a4 8b f0 50 5a 24 bc 13 33 5f b4 ...1....PZ$..3_. 00:22:29.550 00000290 76 cf 88 d5 ed 8b 93 ab 37 39 ff a6 a3 23 0c cd v.......79...#.. 00:22:29.550 000002a0 74 98 b6 d5 ee d2 4c 3a ee 11 f2 c2 f2 67 81 51 t.....L:.....g.Q 00:22:29.550 000002b0 4f fa e4 74 55 c8 8f 07 28 54 dc b4 55 6c 4c f4 O..tU...(T..UlL. 00:22:29.550 000002c0 e9 32 05 c6 c4 90 f1 5d 37 0f 25 76 f5 40 02 7b .2.....]7.%v.@.{ 00:22:29.550 000002d0 99 2a 85 a2 bb 2f bf cc e9 e5 b5 b3 7f f4 07 a9 .*.../.......... 00:22:29.550 000002e0 f0 19 fc 26 37 da 39 8b f3 1d 00 cd ae bb ba b4 ...&7.9......... 
00:22:29.550 000002f0 c1 b0 1a 86 2d 79 64 dd 6a 04 d9 39 83 89 29 3a ....-yd.j..9..): 00:22:29.550 [2024-09-27 13:27:24.793024] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key0, hash=3, dhgroup=4, seq=3775755303, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.550 [2024-09-27 13:27:24.793403] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.550 [2024-09-27 13:27:24.844568] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.550 [2024-09-27 13:27:24.845097] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.550 [2024-09-27 13:27:24.845342] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.550 [2024-09-27 13:27:24.845500] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.550 [2024-09-27 13:27:24.897320] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.550 [2024-09-27 13:27:24.897546] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.550 [2024-09-27 13:27:24.897752] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 4 (ffdhe6144) 00:22:29.550 [2024-09-27 13:27:24.897895] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.550 [2024-09-27 13:27:24.898103] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.550 ctrlr pubkey: 00:22:29.550 00000000 1e de 53 2b dc 5c 5d 29 64 b5 2b 20 bd 1d b3 e5 ..S+.\])d.+ .... 00:22:29.550 00000010 f2 11 af f3 9b f1 e7 d7 96 3d a7 59 64 22 2f 09 .........=.Yd"/. 00:22:29.550 00000020 3a b4 68 d0 ea 99 af 6c d6 8d c6 7d 97 59 a3 bd :.h....l...}.Y.. 00:22:29.550 00000030 7d 76 4d ca 12 49 30 cf 2b 18 bb 4e 73 39 cb 43 }vM..I0.+..Ns9.C 00:22:29.550 00000040 17 11 da b1 74 b7 20 dd d9 4f a8 3c 80 c4 d8 7a ....t. ..O.<...z 00:22:29.550 00000050 2f a7 a1 04 f8 e1 3c 09 f4 89 61 f7 b6 59 6b b4 /.....<...a..Yk. 00:22:29.550 00000060 55 5a 53 cd d8 6e c2 c0 3a b8 08 ad 3e 8b 91 42 UZS..n..:...>..B 00:22:29.550 00000070 26 62 a1 84 19 f4 62 f2 39 b1 31 82 fc 06 f4 d9 &b....b.9.1..... 00:22:29.550 00000080 9b db 52 71 36 20 a5 e4 19 e1 f5 f0 63 b4 a1 df ..Rq6 ......c... 00:22:29.550 00000090 04 a4 f3 e1 5f de bc 21 0a a7 7d 3f fb 9d f9 97 ...._..!..}?.... 00:22:29.550 000000a0 2e 18 e1 c0 26 ac 2e 5b 42 a6 b2 da 63 a5 c3 d7 ....&..[B...c... 00:22:29.550 000000b0 7e c7 27 c0 fc 0e ea f2 4c 9f de e8 24 9f 11 d4 ~.'.....L...$... 00:22:29.550 000000c0 78 81 86 11 dd cc 46 53 48 62 a9 bc 16 32 c9 92 x.....FSHb...2.. 00:22:29.550 000000d0 ab 3d b2 ab c8 70 01 eb a9 0a 15 fe 4a f6 b9 8c .=...p......J... 
00:22:29.550 000000e0 e1 46 f5 32 e7 ce 8e d4 2e 2f a3 40 9c 79 89 43 .F.2...../.@.y.C 00:22:29.550 000000f0 c8 cb af be 8c 72 01 2f dd 0b 01 53 22 6e 70 79 .....r./...S"npy 00:22:29.550 00000100 cb 1b 14 28 45 24 50 31 04 b4 f7 a1 8d 63 ac 1c ...(E$P1.....c.. 00:22:29.550 00000110 b3 fc a3 40 a4 b8 3e 6a 33 b6 33 fb 55 4c 4d dc ...@..>j3.3.ULM. 00:22:29.550 00000120 91 af 4a 20 a6 c7 f7 c9 2f 44 c0 b9 51 22 14 6f ..J ..../D..Q".o 00:22:29.550 00000130 aa bf 11 b1 d7 7c e8 eb 52 1e 26 d9 04 e3 f8 01 .....|..R.&..... 00:22:29.550 00000140 32 11 30 80 f5 b1 6a 40 39 e5 7e 56 c6 a8 1d 67 2.0...j@9.~V...g 00:22:29.550 00000150 74 d7 72 31 24 93 84 14 d6 54 ff 3a 3f 91 3c 7e t.r1$....T.:?.<~ 00:22:29.550 00000160 f9 42 99 8f 78 45 be 4d 27 55 2b 7e 5b 7b 8d f9 .B..xE.M'U+~[{.. 00:22:29.550 00000170 a4 7e 21 22 52 64 8d 23 e9 0e 7a aa b0 2b 19 41 .~!"Rd.#..z..+.A 00:22:29.550 00000180 eb cb aa f7 3b cf 12 9e 43 c3 f5 ff f5 e9 0a 1b ....;...C....... 00:22:29.550 00000190 da 60 c6 db 4f e6 13 49 d3 f2 2d d4 9d 0d dc bf .`..O..I..-..... 00:22:29.550 000001a0 1b 7e 48 58 f4 4d 91 49 ed ef 0c 3b 95 0a b5 d1 .~HX.M.I...;.... 00:22:29.550 000001b0 52 3d 06 82 e9 39 c8 ae 1c c7 b6 50 8a 76 60 3d R=...9.....P.v`= 00:22:29.550 000001c0 f7 f6 e6 c9 b2 53 c4 ff 05 23 04 82 65 6f a5 1d .....S...#..eo.. 00:22:29.550 000001d0 27 5c 44 dc a4 0a 4a 81 21 b1 d4 1c 43 27 ea 0f '\D...J.!...C'.. 00:22:29.550 000001e0 d3 f9 bd c9 75 e0 bf 51 41 01 10 2e c4 86 33 8f ....u..QA.....3. 00:22:29.550 000001f0 23 05 4e f9 e5 62 51 86 db fc 76 8f 80 a6 e4 d1 #.N..bQ...v..... 00:22:29.550 00000200 5a f5 cf f6 12 7d 12 42 1d 42 20 2b 77 c3 55 1c Z....}.B.B +w.U. 00:22:29.550 00000210 21 00 47 bd 3e 4f 48 a6 79 c0 8b aa d5 0b 19 53 !.G.>OH.y......S 00:22:29.550 00000220 36 e0 c8 19 0a 56 ed ab 4b c9 92 a9 65 8b 88 5b 6....V..K...e..[ 00:22:29.550 00000230 be 58 0e 04 7c a1 a2 ae 4f 76 6e 0d 86 c2 62 3d .X..|...Ovn...b= 00:22:29.550 00000240 b8 2d f5 26 26 79 76 b2 91 cd 2c 46 e1 52 ca 56 .-.&&yv...,F.R.V 00:22:29.550 00000250 f2 7d c2 1a 3b 8f 6b f2 85 39 d9 81 c3 ca 61 40 .}..;.k..9....a@ 00:22:29.550 00000260 1a c2 e5 e3 f1 05 4d dd dc ef 6b 47 10 c7 d1 3c ......M...kG...< 00:22:29.550 00000270 02 bb dd 06 84 e7 b3 8d 40 4c 94 da 45 e9 0b a0 ........@L..E... 00:22:29.550 00000280 69 34 03 ae 23 f7 06 cd 63 89 f6 c1 11 18 4b 29 i4..#...c.....K) 00:22:29.550 00000290 84 83 b6 31 74 7f 99 5b b9 47 cd 42 38 be e9 ed ...1t..[.G.B8... 00:22:29.550 000002a0 72 cc 75 4e f7 0d 0e 0b f3 1e d4 ac b4 21 89 99 r.uN.........!.. 00:22:29.550 000002b0 81 4d 83 f2 49 d6 0c 04 32 78 1a d1 37 13 8e 09 .M..I...2x..7... 00:22:29.550 000002c0 cf c4 01 1d 1b 78 8d 55 92 f7 f0 ec 89 dc e6 84 .....x.U........ 00:22:29.550 000002d0 65 f4 e9 42 2c 11 29 12 8f 34 6c c5 58 44 ac 64 e..B,.)..4l.XD.d 00:22:29.550 000002e0 a2 2e 15 fc 20 c4 25 e1 d3 bc 61 25 0c e5 98 01 .... .%...a%.... 00:22:29.550 000002f0 13 23 9e e8 14 5e ab 75 32 e5 a6 98 64 08 21 08 .#...^.u2...d.!. 00:22:29.550 host pubkey: 00:22:29.550 00000000 2b ae 80 3f 79 01 6a 77 5d 64 49 4c fe 9a 94 87 +..?y.jw]dIL.... 00:22:29.550 00000010 89 aa 63 17 b3 c1 94 7e 05 0a e6 9e 62 68 0d 0b ..c....~....bh.. 00:22:29.550 00000020 08 ec 94 83 d7 d5 da d6 81 93 eb 93 36 ef 9f 22 ............6.." 00:22:29.550 00000030 c7 11 13 68 6b af d3 7f 6b ad bb 8c 25 5f 3f 2a ...hk...k...%_?* 00:22:29.550 00000040 53 d9 f1 97 ab c2 62 54 bf 98 1e e2 cb 96 f0 a5 S.....bT........ 
00:22:29.550 00000050 db d7 b6 92 3f 8e 70 1a 8e 63 de 8f 5e 28 51 65 ....?.p..c..^(Qe 00:22:29.550 00000060 ea 8a 34 43 81 62 8c 4e 47 65 64 0a 01 74 63 48 ..4C.b.NGed..tcH 00:22:29.550 00000070 e0 af 6e 35 6c 23 51 e0 46 c7 12 04 53 bf de eb ..n5l#Q.F...S... 00:22:29.550 00000080 53 bf 0d c4 6d 57 da 69 34 81 b1 f5 86 5f 36 d5 S...mW.i4...._6. 00:22:29.550 00000090 66 30 9c f3 71 94 8e 4f 70 6d 7e 4b 70 14 ac bf f0..q..Opm~Kp... 00:22:29.550 000000a0 3f d9 f2 9f 7d 1a 17 d7 b5 a4 8e 9a be 32 64 e8 ?...}........2d. 00:22:29.550 000000b0 c3 5b 40 3c 18 f3 3a 34 3d 05 ac 2c 75 2f 9d 35 .[@<..:4=..,u/.5 00:22:29.550 000000c0 44 03 64 4d d8 a1 d5 88 8e 8d 0a 2e 4d ef 82 61 D.dM........M..a 00:22:29.550 000000d0 8b bc ee 79 d2 6b 7a 5e cc 68 4e f4 7d 35 39 df ...y.kz^.hN.}59. 00:22:29.550 000000e0 8c a1 8f 70 fd f7 f2 bb 5d f2 26 02 85 92 67 69 ...p....].&...gi 00:22:29.550 000000f0 5b 65 fd 2d cc 5a db 02 b3 00 2f f1 50 01 e5 e6 [e.-.Z..../.P... 00:22:29.550 00000100 97 56 5a bd 73 58 9f ec 5e 48 fa 8c 35 ee 39 60 .VZ.sX..^H..5.9` 00:22:29.550 00000110 04 7c f1 08 b9 96 a8 44 85 eb 6c 33 29 f6 fe 5a .|.....D..l3)..Z 00:22:29.550 00000120 67 57 a5 ad 28 37 f8 87 e2 2e f6 8d 9c 31 d6 49 gW..(7.......1.I 00:22:29.550 00000130 95 5a 81 6d f4 f0 70 b8 0e 80 b8 aa 18 a9 5a 64 .Z.m..p.......Zd 00:22:29.550 00000140 be 90 09 e3 eb e5 e9 92 14 e6 ce b8 86 c1 de cf ................ 00:22:29.550 00000150 68 d2 95 87 a9 ea cc a2 00 df 9f ed d3 3e 87 7e h............>.~ 00:22:29.550 00000160 05 f0 43 0c 6e 63 43 7e 07 1e 93 49 62 6d 3b 55 ..C.ncC~...Ibm;U 00:22:29.550 00000170 23 96 b6 42 56 af 8c ff c3 8c ab 61 f7 49 fe c1 #..BV......a.I.. 00:22:29.550 00000180 81 d1 20 19 0d c4 7a 0a f6 99 2b 93 f8 44 33 bb .. ...z...+..D3. 00:22:29.550 00000190 e7 1b d3 fc 4f d0 cc cb 77 01 30 2a ae c7 54 13 ....O...w.0*..T. 00:22:29.550 000001a0 91 d2 d5 d7 d8 f1 7d 4d 73 7b e4 4c 4d 10 b6 fd ......}Ms{.LM... 00:22:29.550 000001b0 92 e2 56 9a b2 6c ee 95 d0 74 4b 22 80 55 12 0d ..V..l...tK".U.. 00:22:29.550 000001c0 18 91 d3 48 46 54 52 f4 91 9c 6d 2f 57 e7 c4 be ...HFTR...m/W... 00:22:29.550 000001d0 44 8d 6d fc cb 94 ac 6f e8 38 fc e5 5d 45 d1 16 D.m....o.8..]E.. 00:22:29.550 000001e0 97 03 85 46 fd ce fe c7 fc 2f 14 3c 7b 85 83 f9 ...F...../.<{... 00:22:29.550 000001f0 a7 d7 23 cf ee cd ec 66 26 ff bb 44 05 5b 37 47 ..#....f&..D.[7G 00:22:29.550 00000200 9e ad d7 27 ad 2b e4 1b f4 ea bc 63 b9 3e d9 0e ...'.+.....c.>.. 00:22:29.550 00000210 04 1b ec 82 fc c0 d4 32 0d ee 0a ec 1a 9b 6b cf .......2......k. 00:22:29.550 00000220 3c 8e b7 fb e6 f0 29 ca 56 5e 22 98 c4 19 96 fd <.....).V^"..... 00:22:29.550 00000230 54 b3 22 bb a3 2e 3e 70 06 ec 59 2c d8 b2 4a 1a T."...>p..Y,..J. 00:22:29.550 00000240 9a d0 f9 a8 a9 2c c7 5a dd 42 76 b0 65 33 4c 04 .....,.Z.Bv.e3L. 00:22:29.550 00000250 43 4f 67 a3 39 cf 56 9a 39 0f ab 40 34 ec a4 19 COg.9.V.9..@4... 00:22:29.550 00000260 3a 96 ec 30 ac 18 00 25 0d 74 61 6f c3 1d 41 f2 :..0...%.tao..A. 00:22:29.550 00000270 b6 3c cd 7d 90 1f 50 72 c2 4a d7 63 1b 61 97 07 .<.}..Pr.J.c.a.. 00:22:29.550 00000280 c7 b6 2b a9 dd 22 1b 42 69 bf 42 35 6e 19 a7 b4 ..+..".Bi.B5n... 00:22:29.550 00000290 92 8c bb e1 7c 33 83 7e 75 0e fc 85 96 9b ed 4c ....|3.~u......L 00:22:29.550 000002a0 1a 1f 42 5f a2 45 40 a6 d6 c0 5a 36 21 7d 46 d4 ..B_.E@...Z6!}F. 00:22:29.550 000002b0 b6 08 8e 50 f6 4c cb e6 67 58 ca 0d e9 a5 af 5e ...P.L..gX.....^ 00:22:29.550 000002c0 bc 81 ea de 60 6c d1 e3 f3 43 87 b4 42 09 c7 ec ....`l...C..B... 
00:22:29.550 000002d0 ee 18 53 30 3e 87 07 a7 36 ee b4 fd 34 63 45 dd ..S0>...6...4cE. 00:22:29.550 000002e0 5b 76 11 02 9b e5 77 19 22 89 44 aa d7 ba 72 7a [v....w.".D...rz 00:22:29.550 000002f0 2d 32 fa 90 7a 57 5b 83 68 b2 d6 df bb 0a d2 bd -2..zW[.h....... 00:22:29.550 dh secret: 00:22:29.550 00000000 92 5b 94 a5 ed 97 9d c7 a0 6e 1a 3c 47 03 ab 3d .[.......n.u.......) 00:22:29.550 00000070 72 d2 6b 6a 8f f6 b0 01 43 fb 1b 9b 36 30 b7 fc r.kj....C...60.. 00:22:29.550 00000080 4c a5 0e 71 e9 07 51 ac 27 03 96 37 3c 82 8b a8 L..q..Q.'..7<... 00:22:29.550 00000090 54 35 89 70 be b5 90 f3 b0 7d c7 21 9c ba 2f df T5.p.....}.!../. 00:22:29.550 000000a0 73 92 c2 2d b7 14 32 d2 d6 bf 6e 30 84 c2 93 5d s..-..2...n0...] 00:22:29.550 000000b0 2e 5b 5c 42 0f ef 56 d6 ed 3b 87 5e d4 24 47 a8 .[\B..V..;.^.$G. 00:22:29.550 000000c0 21 0d fc 3e 61 f3 e2 88 b9 1e d3 93 2a d5 10 a2 !..>a.......*... 00:22:29.550 000000d0 1f 43 62 cc 66 c9 75 36 43 fd 82 9f 4e 9f 07 48 .Cb.f.u6C...N..H 00:22:29.551 000000e0 d5 4a 95 69 04 53 c4 a7 f3 41 79 95 14 45 2d 51 .J.i.S...Ay..E-Q 00:22:29.551 000000f0 fb 07 bf 6f a3 2a 44 4e b1 d0 ce ec fb 3d 48 f4 ...o.*DN.....=H. 00:22:29.551 00000100 4f 84 18 ab d2 b3 16 b9 d5 53 5e 19 fa 54 63 6e O........S^..Tcn 00:22:29.551 00000110 23 af 13 75 a3 46 b0 23 31 c1 ba 97 d0 0c c2 5f #..u.F.#1......_ 00:22:29.551 00000120 91 6d d4 6d 1f 5b 44 6e e2 52 c3 d3 dc 96 66 87 .m.m.[Dn.R....f. 00:22:29.551 00000130 ea 3e d0 19 9a 62 51 5b 80 ee 13 62 08 a0 88 e0 .>...bQ[...b.... 00:22:29.551 00000140 72 93 ed d1 6a 39 33 79 cf 6e 0b 4e 1d 3a fe ee r...j93y.n.N.:.. 00:22:29.551 00000150 9e c7 e2 4d 6d 5d 22 78 37 4c 8a e4 b8 8f d7 c8 ...Mm]"x7L...... 00:22:29.551 00000160 e3 1e 40 86 11 f7 b0 e6 84 2c 01 1a d6 6e 38 2a ..@......,...n8* 00:22:29.551 00000170 bd 0c e7 b0 4d 6c 91 d4 cc d8 03 ce 6f e1 7c 46 ....Ml......o.|F 00:22:29.551 00000180 43 16 bc b0 3e 72 fc 13 3c ec 28 40 ad cf c5 cf C...>r..<.(@.... 00:22:29.551 00000190 93 59 18 9d f2 18 44 12 b9 5c 61 ec cc 3d 69 25 .Y....D..\a..=i% 00:22:29.551 000001a0 64 1d ff 0e 31 73 e5 f3 73 df 00 f6 28 16 f3 c1 d...1s..s...(... 00:22:29.551 000001b0 72 fe a4 22 40 8a f3 42 6e 0f 0b bc 5c 41 db 30 r.."@..Bn...\A.0 00:22:29.551 000001c0 97 e7 04 9a 63 d6 79 b5 86 08 ea d5 a5 13 3f 92 ....c.y.......?. 00:22:29.551 000001d0 3d d2 01 40 e8 59 c2 a8 ce d8 45 b2 6e 50 59 18 =..@.Y....E.nPY. 00:22:29.551 000001e0 aa a2 8b cb 8b 65 d7 bd 7c b0 86 32 c2 7e 4c de .....e..|..2.~L. 00:22:29.551 000001f0 c6 08 05 69 56 33 4e d0 5a 9d ad 3a 21 a5 c9 6d ...iV3N.Z..:!..m 00:22:29.551 00000200 95 84 7c 49 cd 61 1e f7 6b d8 8c 73 ea cb 05 5f ..|I.a..k..s..._ 00:22:29.551 00000210 dd 1c a4 f7 47 9f 71 de e6 96 54 58 36 60 32 dc ....G.q...TX6`2. 00:22:29.551 00000220 84 3b 51 3b 09 4c 36 f8 85 f6 b0 6a e4 e0 36 da .;Q;.L6....j..6. 00:22:29.551 00000230 17 2c 14 f4 b5 be 48 21 b5 24 5c e3 e7 66 af 2f .,....H!.$\..f./ 00:22:29.551 00000240 cd c8 f8 da 29 90 31 a5 26 63 2b 54 90 15 11 c4 ....).1.&c+T.... 00:22:29.551 00000250 8f 61 d8 42 cf 27 eb ec bf f2 20 55 17 40 92 d1 .a.B.'.... U.@.. 00:22:29.551 00000260 d6 86 76 52 d5 e1 68 a4 6e ed 28 54 ea 10 86 ab ..vR..h.n.(T.... 00:22:29.551 00000270 2d 0a d5 e1 e9 85 3e 60 93 17 47 ad ff 30 b0 ed -.....>`..G..0.. 00:22:29.551 00000280 5e 5a 54 ef 27 c1 70 78 9c 0a 20 70 56 19 a4 62 ^ZT.'.px.. pV..b 00:22:29.551 00000290 42 6f e2 42 8e 45 58 3e 1d dc ef 1f bd dd 0e d3 Bo.B.EX>........ 00:22:29.551 000002a0 91 4e 26 34 9a 32 2b dc ff f5 53 56 5f fd ec a1 .N&4.2+...SV_... 
00:22:29.551 000002b0 01 65 3e 6b 20 0a 7e 7d 6b 29 41 c6 1b 98 98 35 .e>k .~}k)A....5 00:22:29.551 000002c0 72 78 3a 39 c7 65 8c 2c 9a e9 de 5e c8 e4 10 d2 rx:9.e.,...^.... 00:22:29.551 000002d0 08 1a 6d ac 2a 6b bc 32 eb 68 c9 a3 db c3 74 b1 ..m.*k.2.h....t. 00:22:29.551 000002e0 af bf ef 5d 22 98 b1 a1 35 ce b5 25 91 d8 9c c1 ...]"...5..%.... 00:22:29.551 000002f0 8f 7b 17 c9 0a 1e 90 6a bb 9d 28 91 cf 68 ea 96 .{.....j..(..h.. 00:22:29.551 [2024-09-27 13:27:24.972350] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key0, hash=3, dhgroup=4, seq=3775755304, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.551 [2024-09-27 13:27:24.972747] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.551 [2024-09-27 13:27:25.024132] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.551 [2024-09-27 13:27:25.024758] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.551 [2024-09-27 13:27:25.024953] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.551 [2024-09-27 13:27:25.025249] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.551 [2024-09-27 13:27:25.155302] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.551 [2024-09-27 13:27:25.155712] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.551 [2024-09-27 13:27:25.155859] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 4 (ffdhe6144) 00:22:29.551 [2024-09-27 13:27:25.156007] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.551 [2024-09-27 13:27:25.156274] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.551 ctrlr pubkey: 00:22:29.551 00000000 32 7d 72 6d 48 9b 31 d0 39 69 8e 79 c8 1b 7c 93 2}rmH.1.9i.y..|. 00:22:29.551 00000010 3f db a4 23 a0 bc b0 18 84 01 48 39 a9 4f 18 2f ?..#......H9.O./ 00:22:29.551 00000020 44 f8 ec 24 35 88 5f 05 99 14 0b 2e 01 3c 97 87 D..$5._......<.. 00:22:29.551 00000030 de 3f 70 b0 1e cd 95 d9 51 73 a3 81 60 9d 13 69 .?p.....Qs..`..i 00:22:29.551 00000040 c2 85 44 77 f6 06 3d b1 4e 5e 39 7e ad 6f 8c 96 ..Dw..=.N^9~.o.. 00:22:29.551 00000050 c4 3a c2 11 43 e2 3d 3a c2 bc 73 ff 02 14 b6 1e .:..C.=:..s..... 00:22:29.551 00000060 03 c1 7b fb e1 17 bf 48 cf a3 e4 d3 16 54 5f 72 ..{....H.....T_r 00:22:29.551 00000070 67 45 47 01 9f fe 5b 46 ca cf 50 5d aa e3 2c 4b gEG...[F..P]..,K 00:22:29.551 00000080 28 5d 41 bd a9 a7 b0 41 92 3c ee 03 84 24 15 82 (]A....A.<...$.. 00:22:29.551 00000090 72 6b 18 26 e9 d3 ad 59 14 8c a9 75 12 0b a6 4e rk.&...Y...u...N 00:22:29.551 000000a0 04 68 ad 3a 9e eb 10 c4 21 d9 70 85 d9 6d 0d 04 .h.:....!.p..m.. 00:22:29.551 000000b0 28 e8 9d 90 16 88 2d ea 0c 72 de 8e 99 fa 72 0a (.....-..r....r. 
00:22:29.551 000000c0 4b ff 9d d9 0c 6b b5 4f 7b 6c 61 87 80 e1 9e 6b K....k.O{la....k 00:22:29.551 000000d0 b3 9a ef 33 42 7e 64 6f e1 ac be fc 4d fb 79 99 ...3B~do....M.y. 00:22:29.551 000000e0 b9 99 09 3d eb b1 5b ca 3b 98 1d 3c 51 44 55 c4 ...=..[.;..y...r 00:22:29.551 00000120 61 a6 6f 82 e0 43 5c 59 61 19 3d b8 e1 06 2e 40 a.o..C\Ya.=....@ 00:22:29.551 00000130 f5 72 58 5a 38 0c db a4 62 8e 09 ba 93 b3 a2 34 .rXZ8...b......4 00:22:29.551 00000140 12 b6 e8 53 14 3d 8f 35 e1 d8 a5 6c 9c 2a c9 c4 ...S.=.5...l.*.. 00:22:29.551 00000150 94 f6 4a 7c bb b0 c8 05 af ae c9 38 54 b2 ee 02 ..J|.......8T... 00:22:29.551 00000160 21 31 93 d5 ad 32 e0 47 5a bc 58 a2 18 47 f6 44 !1...2.GZ.X..G.D 00:22:29.551 00000170 f0 2b ae ae 51 5e b7 39 67 9a 7c 9f b8 3b 37 8b .+..Q^.9g.|..;7. 00:22:29.551 00000180 89 87 fc b9 c6 2a 92 ab 03 32 62 8a e5 5f ca 7c .....*...2b.._.| 00:22:29.551 00000190 65 e2 21 63 67 96 5f aa e2 b3 2e 4b ea a5 3a d9 e.!cg._....K..:. 00:22:29.551 000001a0 5b 54 7f 06 74 ee 91 a8 a0 0a 2a 82 ca 3a f2 5a [T..t.....*..:.Z 00:22:29.551 000001b0 1f 7e 7b c3 28 19 06 e4 0f 21 9b 1c 85 20 b5 18 .~{.(....!... .. 00:22:29.551 000001c0 58 f0 58 91 e0 03 a6 01 ab 02 89 2c ba 08 3a 30 X.X........,..:0 00:22:29.551 000001d0 93 07 9f 03 0b 45 12 89 9a 64 20 8e f3 85 54 0b .....E...d ...T. 00:22:29.551 000001e0 2e f1 bb 06 f3 ae 0f 2e a3 bf fd 0c 43 72 5b d7 ............Cr[. 00:22:29.551 000001f0 1a d1 df 45 88 b8 98 e1 bc 0d e4 e1 6b 77 a6 01 ...E........kw.. 00:22:29.551 00000200 d4 fc 7e 7d 29 ee fe 25 cf 5f 12 92 10 4b 21 d7 ..~})..%._...K!. 00:22:29.551 00000210 75 87 5a 5f 28 52 75 eb e3 85 5d c1 ec bd 77 66 u.Z_(Ru...]...wf 00:22:29.551 00000220 ff eb 4a 2e 29 13 ea 7b b8 c0 6f 2c 47 56 45 51 ..J.)..{..o,GVEQ 00:22:29.551 00000230 67 fe 43 ca 2e 67 7b 6c 56 1e 66 21 7f 60 5b b1 g.C..g{lV.f!.`[. 00:22:29.551 00000240 aa 9c f5 61 b9 97 fc ff ea 41 6d 40 67 39 8a ec ...a.....Am@g9.. 00:22:29.551 00000250 42 7a 94 77 1e 25 0e f0 f6 34 9c ef 67 17 b3 3f Bz.w.%...4..g..? 00:22:29.551 00000260 56 2d 03 cd 7e 1a c1 df 12 10 11 bf bc 4b e6 22 V-..~........K." 00:22:29.551 00000270 ef 8d 43 24 25 36 91 3b 65 63 2b b7 c5 04 c1 a0 ..C$%6.;ec+..... 00:22:29.551 00000280 2b bd da 26 14 c7 e0 3c 08 4d 9d 20 c6 11 3b d2 +..&...<.M. ..;. 00:22:29.551 00000290 31 81 4a 99 d6 d9 b5 48 aa d5 a5 d4 4d 22 e2 78 1.J....H....M".x 00:22:29.551 000002a0 10 b8 4c 2e f1 12 dc f3 bd 63 12 d6 65 e4 be ab ..L......c..e... 00:22:29.551 000002b0 fe c3 15 91 bc 3a fe 83 af 8d b9 e4 67 d0 42 b5 .....:......g.B. 00:22:29.551 000002c0 61 00 39 b9 90 64 0c d2 f4 be 14 0e cc 05 e6 ce a.9..d.......... 00:22:29.551 000002d0 86 01 92 32 19 96 f9 5d 67 b2 1f 69 eb 78 4d 5a ...2...]g..i.xMZ 00:22:29.551 000002e0 c2 5d 3a 2f cd 86 9b 6d 8a f4 62 85 05 40 b5 0a .]:/...m..b..@.. 00:22:29.551 000002f0 0d 15 41 6b c1 b2 c9 73 32 bb f8 f0 62 e0 d5 a7 ..Ak...s2...b... 00:22:29.551 host pubkey: 00:22:29.551 00000000 e5 bf f7 4c 2f f0 e9 5a 84 ec 85 63 9a e1 c6 64 ...L/..Z...c...d 00:22:29.551 00000010 7a 4b 3d ff 69 6d 1c f0 9e c6 6a ed 3c d8 e7 1c zK=.im....j.<... 
00:22:29.551 00000020 25 32 a2 68 5b 3c 3e 91 37 ba e5 66 e5 1f 07 57 %2.h[<>.7..f...W 00:22:29.551 00000030 4e 31 c9 c9 46 41 40 2a 04 67 6e f1 0d 5b 7f 46 N1..FA@*.gn..[.F 00:22:29.551 00000040 92 e5 d2 69 11 a4 14 85 94 d5 7c 2a 37 1e 59 5d ...i......|*7.Y] 00:22:29.551 00000050 71 7a 0d a5 ed 46 fe 4b 77 5f 6f 26 e3 ab e8 40 qz...F.Kw_o&...@ 00:22:29.551 00000060 8e 9e 05 a1 9f 96 9b e5 42 6c e0 3a 8a fd 23 4d ........Bl.:..#M 00:22:29.551 00000070 e8 59 30 03 cd 71 bd 5a c1 88 67 d8 e1 53 dd 59 .Y0..q.Z..g..S.Y 00:22:29.551 00000080 2a 4a 42 a2 72 5e cf d0 6e ec a9 ef e2 45 c7 d4 *JB.r^..n....E.. 00:22:29.551 00000090 ef 76 db 53 04 f2 2b 37 12 ae ca 77 06 1c a4 25 .v.S..+7...w...% 00:22:29.551 000000a0 fb cb 16 5d b1 44 a9 12 b0 48 66 55 90 e0 ed 61 ...].D...HfU...a 00:22:29.551 000000b0 1e 20 a0 ef 1f 5e b8 99 b6 65 2e aa 7c 47 c2 ec . ...^...e..|G.. 00:22:29.551 000000c0 ba de 57 0c 25 45 c6 13 24 3c ff 52 f6 43 5d 3d ..W.%E..$<.R.C]= 00:22:29.551 000000d0 b1 86 1b 63 2e 48 96 a8 03 a6 3a 65 df fa 60 27 ...c.H....:e..`' 00:22:29.551 000000e0 75 c4 83 df 7b 7e 35 62 32 ef 6c 56 a1 84 97 d2 u...{~5b2.lV.... 00:22:29.551 000000f0 72 63 42 0c a7 31 ac af df 2f 39 82 58 8c b1 8d rcB..1.../9.X... 00:22:29.551 00000100 20 02 25 f5 81 5c 90 f7 37 ae bc 36 5d e5 03 ea .%..\..7..6]... 00:22:29.551 00000110 4c ca a1 e0 38 81 a0 e2 05 65 be 69 38 37 0b 17 L...8....e.i87.. 00:22:29.551 00000120 64 1b 20 59 17 0e 40 6c 60 e7 0f 55 6a c6 56 4d d. Y..@l`..Uj.VM 00:22:29.551 00000130 33 4e 81 3a 3f 34 76 9f ec 9c 45 70 bf f0 0e bf 3N.:?4v...Ep.... 00:22:29.551 00000140 97 f9 8e c0 53 a2 89 03 50 53 0e b2 d8 0f 2c c7 ....S...PS....,. 00:22:29.551 00000150 56 12 30 0d c8 0b f5 95 41 c3 e5 16 f2 6e e8 69 V.0.....A....n.i 00:22:29.551 00000160 6f 5d a1 7e df 7c da f6 c5 b6 0d fd fa 08 64 de o].~.|........d. 00:22:29.551 00000170 41 7d 8a 82 24 94 cd 3f 6b 1b 23 c9 8a a7 73 cb A}..$..?k.#...s. 00:22:29.551 00000180 25 31 40 3a 2b 31 ae 00 a6 c0 c5 5a 04 b8 5a fb %1@:+1.....Z..Z. 00:22:29.551 00000190 48 58 29 52 33 19 84 0f 01 e1 76 36 d3 0c aa dc HX)R3.....v6.... 00:22:29.551 000001a0 f6 9e 4c a5 3b 22 dd c0 98 6b 8a 3a a7 ff 5a 58 ..L.;"...k.:..ZX 00:22:29.551 000001b0 42 8e 58 9e 2e c1 bf 26 56 50 e3 38 45 d6 d1 9b B.X....&VP.8E... 00:22:29.551 000001c0 6d 4e 2f 00 48 79 30 21 62 bd 71 35 de 10 04 77 mN/.Hy0!b.q5...w 00:22:29.551 000001d0 7b dd f4 c5 07 a2 57 0e 17 a3 8f 20 3b eb cd bd {.....W.... ;... 00:22:29.551 000001e0 2b 59 b0 42 b6 1c 3b c1 fd 56 43 97 67 cd c2 2c +Y.B..;..VC.g.., 00:22:29.551 000001f0 c5 28 2a 34 04 f5 01 db 95 e9 54 0f ee 4d 03 d9 .(*4......T..M.. 00:22:29.551 00000200 06 5c f2 68 97 ec a7 8a c8 39 a7 56 3c b3 f2 0e .\.h.....9.V<... 00:22:29.551 00000210 ca 5b d3 53 5b 5b b3 4a 1e ac 39 85 78 b5 49 72 .[.S[[.J..9.x.Ir 00:22:29.551 00000220 6f bb 87 18 de 58 b6 20 e8 4c 9b 32 5f 11 96 30 o....X. .L.2_..0 00:22:29.551 00000230 fc e6 d7 08 c4 ec 2b 73 09 f8 c9 1b 24 08 d9 50 ......+s....$..P 00:22:29.551 00000240 77 02 a7 65 f6 de 50 df 44 40 34 2a 53 0f 72 10 w..e..P.D@4*S.r. 00:22:29.551 00000250 18 39 ef 0e 4d aa 59 e0 11 86 07 06 48 47 37 07 .9..M.Y.....HG7. 00:22:29.551 00000260 c7 b9 cf 7b f6 ff fd cc 41 bf 37 0b 61 81 f6 00 ...{....A.7.a... 00:22:29.551 00000270 5b dd 47 bd cc d4 92 f3 d1 d7 9f 44 b2 8b ec b5 [.G........D.... 00:22:29.551 00000280 3b 1c fc 7b 37 fc 5e 79 ad d6 21 50 eb b7 99 85 ;..{7.^y..!P.... 00:22:29.551 00000290 6e 67 ef 81 2a 60 33 a1 e0 84 0c d8 42 e8 77 b5 ng..*`3.....B.w. 
00:22:29.551 000002a0 85 8c 4a 86 dc e4 29 68 93 c3 45 8e 2f de 64 93 ..J...)h..E./.d. 00:22:29.551 000002b0 23 20 ea 12 e7 03 52 b1 db 14 82 7c d2 80 1c bb # ....R....|.... 00:22:29.551 000002c0 23 c1 b3 f1 25 28 e4 0f cd 92 bf 7f e7 6b 65 41 #...%(.......keA 00:22:29.551 000002d0 47 51 32 aa b9 f6 e4 71 a8 40 59 a9 00 6d 45 14 GQ2....q.@Y..mE. 00:22:29.551 000002e0 30 58 36 7f a9 9f d7 c7 d5 35 6c b7 69 0e d3 58 0X6......5l.i..X 00:22:29.552 000002f0 12 a6 b5 de a3 cf eb e4 36 b9 a4 8c a2 10 fb a1 ........6....... 00:22:29.552 dh secret: 00:22:29.552 00000000 2f 2e d4 2f f9 1d e5 24 04 b0 7d f8 77 43 c1 cc /../...$..}.wC.. 00:22:29.552 00000010 43 1a 3d 0f bf 0b 8b 07 77 81 34 eb 1d c2 5e 69 C.=.....w.4...^i 00:22:29.552 00000020 91 e6 00 25 35 a2 30 f6 ba ad d1 55 1c 52 85 bb ...%5.0....U.R.. 00:22:29.552 00000030 55 71 6d e1 ab 8d c0 f4 0b ef 93 01 f4 2c 2b 23 Uqm..........,+# 00:22:29.552 00000040 46 59 88 78 bb b8 38 df 49 a4 30 e5 e4 aa a1 6b FY.x..8.I.0....k 00:22:29.552 00000050 52 12 37 7b e3 6d 7f 50 0f 3d a8 ab a9 05 98 1f R.7{.m.P.=...... 00:22:29.552 00000060 e3 d4 c7 63 a9 86 77 60 d6 41 80 57 13 d6 c4 cb ...c..w`.A.W.... 00:22:29.552 00000070 35 8b 0e 37 26 fc 22 1f 16 2a 53 ea 70 46 af 0b 5..7&."..*S.pF.. 00:22:29.552 00000080 e7 a7 bd ba c7 1e 1b 9f 8e 62 0d b7 d7 31 f9 c0 .........b...1.. 00:22:29.552 00000090 a9 10 28 ef d5 de fe e5 15 23 b1 3d ef 9d 86 d3 ..(......#.=.... 00:22:29.552 000000a0 37 7e c3 ac 6b c5 62 80 b6 20 23 dd 02 62 e3 ec 7~..k.b.. #..b.. 00:22:29.552 000000b0 4d e3 3f d4 8a bc 1d fd 9f 93 b8 80 fb b1 9e e1 M.?............. 00:22:29.552 000000c0 a5 15 eb 34 2d 8f f5 83 e6 b1 78 ef e9 81 c2 60 ...4-.....x....` 00:22:29.552 000000d0 7a bd 74 d1 a5 30 4e 7c 6a e5 70 91 0d f2 4d 41 z.t..0N|j.p...MA 00:22:29.552 000000e0 82 b2 55 84 8f ac d4 d7 1d 8c 13 7e 96 b1 59 84 ..U........~..Y. 00:22:29.552 000000f0 71 37 3f 8d 31 55 5c 7a 65 1b c5 6d be ef f9 ec q7?.1U\ze..m.... 00:22:29.552 00000100 13 24 61 f2 bd 2e cd 7a 47 80 28 8e 49 75 dd 61 .$a....zG.(.Iu.a 00:22:29.552 00000110 e7 64 c8 41 eb c4 f7 4d 86 91 4e 0c c4 8b 83 dc .d.A...M..N..... 00:22:29.552 00000120 2c ae 90 8e 8c 1f 77 8b 91 97 5f a0 27 32 9a ea ,.....w..._.'2.. 00:22:29.552 00000130 ce 1d 8a f1 b5 73 50 c6 35 24 f1 60 ff 6f 98 75 .....sP.5$.`.o.u 00:22:29.552 00000140 5a 59 b6 2e b6 1f 4d a1 f4 6a 1c d2 c2 31 28 d2 ZY....M..j...1(. 00:22:29.552 00000150 cb 19 1e 0f 1f 6a db 4e 5c 4d 60 9f 07 cc 8c 67 .....j.N\M`....g 00:22:29.552 00000160 cc 3c ed 55 e8 a7 5a 6f c6 3f 75 d9 19 e2 e9 c9 .<.U..Zo.?u..... 00:22:29.552 00000170 17 64 b8 8f 68 6d 88 12 5c 64 ce bf 59 2b a8 a8 .d..hm..\d..Y+.. 00:22:29.552 00000180 c8 5f 03 cf 78 eb 86 e9 a7 b2 f9 db a5 4f b7 e7 ._..x........O.. 00:22:29.552 00000190 4e 65 be 5d 70 2f ae 2b e6 53 bc ad 2a a3 1f 05 Ne.]p/.+.S..*... 00:22:29.552 000001a0 67 0c bd c5 2d 73 a8 55 83 f8 93 eb 0c 53 81 cf g...-s.U.....S.. 00:22:29.552 000001b0 0b 4c 91 f8 ba 13 f4 be 19 23 0e 6f b3 a5 bb 81 .L.......#.o.... 00:22:29.552 000001c0 ed 1d 22 26 fa 9a 21 8e 86 2b 5d d4 fa c7 3c 30 .."&..!..+]...<0 00:22:29.552 000001d0 b6 26 38 92 b8 56 1c 1f 44 da 73 fb 7c a2 2e 1f .&8..V..D.s.|... 00:22:29.552 000001e0 d8 0c 87 a0 7c 0e 37 94 c7 08 20 a9 97 c6 d3 2a ....|.7... ....* 00:22:29.552 000001f0 92 43 1f c1 e4 61 c1 62 62 b7 b4 f5 ea ae 34 78 .C...a.bb.....4x 00:22:29.552 00000200 3d aa 7d 9b 6f 32 e3 be d9 f8 f3 bc c6 d2 dc 99 =.}.o2.......... 00:22:29.552 00000210 74 69 14 24 6f 07 1e 3e f9 3f 51 8d 26 f1 bd e8 ti.$o..>.?Q.&... 
00:22:29.552 00000220 7d 43 b8 35 1f 47 c0 84 82 fe 49 07 99 af 58 58 }C.5.G....I...XX 00:22:29.552 00000230 47 87 88 d6 f8 ff 11 af a6 c6 f8 ac 8b 5d 93 a9 G............].. 00:22:29.552 00000240 d1 5e 69 69 3b b0 4a ff b5 5a 7f 03 24 40 12 9d .^ii;.J..Z..$@.. 00:22:29.552 00000250 3b 01 8e a9 f8 dc 36 64 a2 2e 0d c3 f7 f2 00 40 ;.....6d.......@ 00:22:29.552 00000260 3a 85 e9 aa 74 cc 4e 91 97 98 c7 33 52 23 f5 a1 :...t.N....3R#.. 00:22:29.552 00000270 9e 6f 3d c7 d7 91 eb 4c 8b fc 35 12 4a b1 b7 5c .o=....L..5.J..\ 00:22:29.552 00000280 89 0f 60 51 f5 9f c4 d5 87 36 2c 22 80 26 ab 3e ..`Q.....6,".&.> 00:22:29.552 00000290 e5 76 79 09 e0 71 58 07 2c 37 fc 23 d3 f1 3f 1b .vy..qX.,7.#..?. 00:22:29.552 000002a0 d1 20 92 df a6 73 33 16 4a d6 30 bf 33 6e 20 65 . ...s3.J.0.3n e 00:22:29.552 000002b0 2a ae 30 9c 30 79 80 89 9b 85 30 39 3d 8d b0 dc *.0.0y....09=... 00:22:29.552 000002c0 1c a8 70 d5 a8 e7 6f 79 ad 65 17 a7 7f 0c 3f cb ..p...oy.e....?. 00:22:29.552 000002d0 6c c5 bc c1 23 98 70 db b6 10 e9 d3 8e 4d ff f7 l...#.p......M.. 00:22:29.552 000002e0 4e ad 2e db ba 11 78 b8 0b 12 cb 1f 11 52 74 75 N.....x......Rtu 00:22:29.552 000002f0 45 05 c9 af e0 62 07 bd e3 26 66 43 24 50 06 8a E....b...&fC$P.. 00:22:29.552 [2024-09-27 13:27:25.252921] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key1, hash=3, dhgroup=4, seq=3775755305, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.552 [2024-09-27 13:27:25.253297] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.552 [2024-09-27 13:27:25.318724] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.552 [2024-09-27 13:27:25.319319] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.552 [2024-09-27 13:27:25.319520] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.552 [2024-09-27 13:27:25.319766] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.552 [2024-09-27 13:27:25.371995] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.552 [2024-09-27 13:27:25.372210] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.552 [2024-09-27 13:27:25.372345] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 4 (ffdhe6144) 00:22:29.552 [2024-09-27 13:27:25.372543] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.552 [2024-09-27 13:27:25.372839] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.552 ctrlr pubkey: 00:22:29.552 00000000 32 7d 72 6d 48 9b 31 d0 39 69 8e 79 c8 1b 7c 93 2}rmH.1.9i.y..|. 00:22:29.552 00000010 3f db a4 23 a0 bc b0 18 84 01 48 39 a9 4f 18 2f ?..#......H9.O./ 00:22:29.552 00000020 44 f8 ec 24 35 88 5f 05 99 14 0b 2e 01 3c 97 87 D..$5._......<.. 
00:22:29.552 00000030 de 3f 70 b0 1e cd 95 d9 51 73 a3 81 60 9d 13 69 .?p.....Qs..`..i 00:22:29.552 00000040 c2 85 44 77 f6 06 3d b1 4e 5e 39 7e ad 6f 8c 96 ..Dw..=.N^9~.o.. 00:22:29.552 00000050 c4 3a c2 11 43 e2 3d 3a c2 bc 73 ff 02 14 b6 1e .:..C.=:..s..... 00:22:29.552 00000060 03 c1 7b fb e1 17 bf 48 cf a3 e4 d3 16 54 5f 72 ..{....H.....T_r 00:22:29.552 00000070 67 45 47 01 9f fe 5b 46 ca cf 50 5d aa e3 2c 4b gEG...[F..P]..,K 00:22:29.552 00000080 28 5d 41 bd a9 a7 b0 41 92 3c ee 03 84 24 15 82 (]A....A.<...$.. 00:22:29.552 00000090 72 6b 18 26 e9 d3 ad 59 14 8c a9 75 12 0b a6 4e rk.&...Y...u...N 00:22:29.552 000000a0 04 68 ad 3a 9e eb 10 c4 21 d9 70 85 d9 6d 0d 04 .h.:....!.p..m.. 00:22:29.552 000000b0 28 e8 9d 90 16 88 2d ea 0c 72 de 8e 99 fa 72 0a (.....-..r....r. 00:22:29.552 000000c0 4b ff 9d d9 0c 6b b5 4f 7b 6c 61 87 80 e1 9e 6b K....k.O{la....k 00:22:29.552 000000d0 b3 9a ef 33 42 7e 64 6f e1 ac be fc 4d fb 79 99 ...3B~do....M.y. 00:22:29.552 000000e0 b9 99 09 3d eb b1 5b ca 3b 98 1d 3c 51 44 55 c4 ...=..[.;..y...r 00:22:29.552 00000120 61 a6 6f 82 e0 43 5c 59 61 19 3d b8 e1 06 2e 40 a.o..C\Ya.=....@ 00:22:29.552 00000130 f5 72 58 5a 38 0c db a4 62 8e 09 ba 93 b3 a2 34 .rXZ8...b......4 00:22:29.552 00000140 12 b6 e8 53 14 3d 8f 35 e1 d8 a5 6c 9c 2a c9 c4 ...S.=.5...l.*.. 00:22:29.552 00000150 94 f6 4a 7c bb b0 c8 05 af ae c9 38 54 b2 ee 02 ..J|.......8T... 00:22:29.552 00000160 21 31 93 d5 ad 32 e0 47 5a bc 58 a2 18 47 f6 44 !1...2.GZ.X..G.D 00:22:29.552 00000170 f0 2b ae ae 51 5e b7 39 67 9a 7c 9f b8 3b 37 8b .+..Q^.9g.|..;7. 00:22:29.552 00000180 89 87 fc b9 c6 2a 92 ab 03 32 62 8a e5 5f ca 7c .....*...2b.._.| 00:22:29.552 00000190 65 e2 21 63 67 96 5f aa e2 b3 2e 4b ea a5 3a d9 e.!cg._....K..:. 00:22:29.552 000001a0 5b 54 7f 06 74 ee 91 a8 a0 0a 2a 82 ca 3a f2 5a [T..t.....*..:.Z 00:22:29.552 000001b0 1f 7e 7b c3 28 19 06 e4 0f 21 9b 1c 85 20 b5 18 .~{.(....!... .. 00:22:29.552 000001c0 58 f0 58 91 e0 03 a6 01 ab 02 89 2c ba 08 3a 30 X.X........,..:0 00:22:29.552 000001d0 93 07 9f 03 0b 45 12 89 9a 64 20 8e f3 85 54 0b .....E...d ...T. 00:22:29.552 000001e0 2e f1 bb 06 f3 ae 0f 2e a3 bf fd 0c 43 72 5b d7 ............Cr[. 00:22:29.552 000001f0 1a d1 df 45 88 b8 98 e1 bc 0d e4 e1 6b 77 a6 01 ...E........kw.. 00:22:29.552 00000200 d4 fc 7e 7d 29 ee fe 25 cf 5f 12 92 10 4b 21 d7 ..~})..%._...K!. 00:22:29.552 00000210 75 87 5a 5f 28 52 75 eb e3 85 5d c1 ec bd 77 66 u.Z_(Ru...]...wf 00:22:29.552 00000220 ff eb 4a 2e 29 13 ea 7b b8 c0 6f 2c 47 56 45 51 ..J.)..{..o,GVEQ 00:22:29.552 00000230 67 fe 43 ca 2e 67 7b 6c 56 1e 66 21 7f 60 5b b1 g.C..g{lV.f!.`[. 00:22:29.552 00000240 aa 9c f5 61 b9 97 fc ff ea 41 6d 40 67 39 8a ec ...a.....Am@g9.. 00:22:29.552 00000250 42 7a 94 77 1e 25 0e f0 f6 34 9c ef 67 17 b3 3f Bz.w.%...4..g..? 00:22:29.552 00000260 56 2d 03 cd 7e 1a c1 df 12 10 11 bf bc 4b e6 22 V-..~........K." 00:22:29.552 00000270 ef 8d 43 24 25 36 91 3b 65 63 2b b7 c5 04 c1 a0 ..C$%6.;ec+..... 00:22:29.552 00000280 2b bd da 26 14 c7 e0 3c 08 4d 9d 20 c6 11 3b d2 +..&...<.M. ..;. 00:22:29.552 00000290 31 81 4a 99 d6 d9 b5 48 aa d5 a5 d4 4d 22 e2 78 1.J....H....M".x 00:22:29.552 000002a0 10 b8 4c 2e f1 12 dc f3 bd 63 12 d6 65 e4 be ab ..L......c..e... 00:22:29.552 000002b0 fe c3 15 91 bc 3a fe 83 af 8d b9 e4 67 d0 42 b5 .....:......g.B. 00:22:29.552 000002c0 61 00 39 b9 90 64 0c d2 f4 be 14 0e cc 05 e6 ce a.9..d.......... 
00:22:29.552 000002d0 86 01 92 32 19 96 f9 5d 67 b2 1f 69 eb 78 4d 5a ...2...]g..i.xMZ 00:22:29.552 000002e0 c2 5d 3a 2f cd 86 9b 6d 8a f4 62 85 05 40 b5 0a .]:/...m..b..@.. 00:22:29.552 000002f0 0d 15 41 6b c1 b2 c9 73 32 bb f8 f0 62 e0 d5 a7 ..Ak...s2...b... 00:22:29.552 host pubkey: 00:22:29.552 00000000 64 1c f3 53 ea 11 ad 6f b0 74 0d f6 e1 ad ee b1 d..S...o.t...... 00:22:29.552 00000010 83 f5 41 e5 d5 e7 00 5b d7 ae fd 49 f5 14 24 76 ..A....[...I..$v 00:22:29.552 00000020 ac 94 d8 10 0c 96 61 82 52 6b f5 3f a6 05 7e 44 ......a.Rk.?..~D 00:22:29.552 00000030 0f 55 84 4b df 5c aa fa 9d 44 37 00 d7 4b 89 6b .U.K.\...D7..K.k 00:22:29.552 00000040 ab f9 16 17 84 7d ea 81 1a 3c 4e 7f 5c b8 1b 19 .....}...]... 00:22:29.552 00000230 83 6a ef 80 88 af 5f f7 f2 d4 8e 34 5c 09 29 2d .j...._....4\.)- 00:22:29.552 00000240 8a c7 8d ea f4 d5 c3 f8 11 bf 6e 65 a2 25 27 6c ..........ne.%'l 00:22:29.552 00000250 69 69 76 41 31 db ae 3f 58 6a 1e 84 d4 34 f5 95 iivA1..?Xj...4.. 00:22:29.552 00000260 52 c9 04 7a 40 28 c1 17 49 24 a9 ef 43 4f a1 29 R..z@(..I$..CO.) 00:22:29.552 00000270 12 e7 7e 9b a1 da b3 12 f9 c3 83 60 4f 2f 33 a2 ..~........`O/3. 00:22:29.552 00000280 32 79 d7 e3 96 4b 44 84 c1 0e 28 c5 a1 85 b6 bb 2y...KD...(..... 00:22:29.552 00000290 07 66 95 38 37 0c bd da 30 85 da ca dc 0c cc 2d .f.87...0......- 00:22:29.552 000002a0 af 0e f6 9b 8b 05 17 8a b6 5b 4b 4d 96 7e 27 13 .........[KM.~'. 00:22:29.552 000002b0 b3 29 f2 d0 27 a7 46 f6 bf e0 57 92 61 29 20 a5 .)..'.F...W.a) . 00:22:29.552 000002c0 36 7e 39 68 9b 09 48 d3 3f af fa 66 73 30 db 89 6~9h..H.?..fs0.. 00:22:29.552 000002d0 0c 18 82 49 ed c6 8b aa 7d 03 34 9a ad 4c 6d a8 ...I....}.4..Lm. 00:22:29.552 000002e0 32 e1 c0 1d 00 7a 9f 2b 42 91 70 01 fa 02 fe 75 2....z.+B.p....u 00:22:29.552 000002f0 8b 3e ab c1 63 e1 3e af e8 b8 cb a3 16 e6 47 fd .>..c.>.......G. 00:22:29.552 dh secret: 00:22:29.552 00000000 07 84 fc 50 07 10 21 ef cc a5 52 83 d9 63 ae e5 ...P..!...R..c.. 00:22:29.552 00000010 63 29 8d 4d fd b7 20 ab 71 87 0b 0f d9 71 2e fa c).M.. .q....q.. 00:22:29.552 00000020 f6 bc 1a 1b 4e 4f aa ff a8 73 75 9a dc 67 e9 4f ....NO...su..g.O 00:22:29.552 00000030 aa 22 ef 73 d3 c6 0c eb 9c 5d 52 28 ff a0 e6 74 .".s.....]R(...t 00:22:29.552 00000040 51 0d 72 a6 d9 28 eb 3e ca f3 28 fa 28 73 68 3e Q.r..(.>..(.(sh> 00:22:29.552 00000050 fb c1 b7 5e d0 c0 fd 1b 74 2f a9 5c 12 d3 e4 61 ...^....t/.\...a 00:22:29.552 00000060 95 be df 28 9f 71 a8 ae e1 11 11 de a3 c4 d7 74 ...(.q.........t 00:22:29.552 00000070 6f 18 bf 6b e1 32 7b 57 42 10 5f 66 2a b3 6e 39 o..k.2{WB._f*.n9 00:22:29.552 00000080 a6 c0 40 cd b1 6c d1 9c b0 1f 0c 6c 98 cb 42 01 ..@..l.....l..B. 00:22:29.553 00000090 53 c0 2d 0d 2c 62 51 07 f9 f2 13 ce 89 74 0e f6 S.-.,bQ......t.. 00:22:29.553 000000a0 d7 6d b2 76 fd 72 75 75 49 60 c6 2d 43 2e 21 40 .m.v.ruuI`.-C.!@ 00:22:29.553 000000b0 0f 6f a4 e3 fc 71 dc 99 48 e1 bc 4b e5 6a de b5 .o...q..H..K.j.. 00:22:29.553 000000c0 51 7e de a3 24 2d 57 41 fc 4b 94 e1 7a 7a 10 da Q~..$-WA.K..zz.. 00:22:29.553 000000d0 27 61 6d 94 4e 63 ca 3c b2 77 da ad aa a6 95 e6 'am.Nc.<.w...... 00:22:29.553 000000e0 db a5 3f 9a d3 ce 25 5f 8a f6 56 4b 7f 7d be 92 ..?...%_..VK.}.. 00:22:29.553 000000f0 21 8e e9 d5 b3 a5 6f b1 5f 9e 0b 8d b2 21 a5 d3 !.....o._....!.. 00:22:29.553 00000100 10 a3 e6 e2 c6 b3 06 57 61 f3 ea f0 cf 63 17 46 .......Wa....c.F 00:22:29.553 00000110 85 89 40 1c bb 74 9d a9 76 56 6c e5 24 e6 5f 73 ..@..t..vVl.$._s 00:22:29.553 00000120 70 84 ba c8 ca 40 0e e2 eb a0 19 30 4b c5 c5 d5 p....@.....0K... 
00:22:29.553 00000130 d8 4d 7d 69 27 08 f0 0a 97 e1 dc 22 a7 12 43 e3 .M}i'......"..C. 00:22:29.553 00000140 39 70 0c d5 69 64 60 26 4b 1d 73 1c fd 3c 23 39 9p..id`&K.s..<#9 00:22:29.553 00000150 b9 ae 45 03 23 aa 7d cc 99 10 0e 58 29 97 eb 1b ..E.#.}....X)... 00:22:29.553 00000160 90 cd 7a 8e e3 ce 98 a2 60 90 72 b9 e9 2b af 12 ..z.....`.r..+.. 00:22:29.553 00000170 f5 35 b0 cd 6c 3d 27 10 bd 76 09 a9 60 b4 7a 9f .5..l='..v..`.z. 00:22:29.553 00000180 8b 31 82 8d d8 fb af 96 4a 39 07 05 7f 2b 80 e5 .1......J9...+.. 00:22:29.553 00000190 cc 06 4b f4 c0 28 72 8d f5 37 40 98 d3 14 ad d3 ..K..(r..7@..... 00:22:29.553 000001a0 92 5d 95 6c a9 0d e0 8a 42 49 d9 28 c7 55 8c 2c .].l....BI.(.U., 00:22:29.553 000001b0 5a b6 de 43 b4 d2 84 91 98 46 57 28 47 e5 e0 7f Z..C.....FW(G... 00:22:29.553 000001c0 6c 03 89 57 e3 61 11 9f f1 6f 75 f2 93 dc 81 7a l..W.a...ou....z 00:22:29.553 000001d0 86 2e 67 92 8b e1 6b 68 01 f5 82 4f c5 d8 04 b1 ..g...kh...O.... 00:22:29.553 000001e0 a0 34 70 36 bb 96 d2 bc 90 35 b2 06 d8 72 ca 2b .4p6.....5...r.+ 00:22:29.553 000001f0 e2 46 3c e9 60 4a b3 1d 62 b2 6a 2f b2 68 5b 0f .F<.`J..b.j/.h[. 00:22:29.553 00000200 5f 6a 8d 42 32 d5 68 0d 9d 3f b0 9e b6 d2 d7 f0 _j.B2.h..?...... 00:22:29.553 00000210 85 e5 a0 d3 5b de 5a 8b 1d 59 11 fe a0 65 ca c3 ....[.Z..Y...e.. 00:22:29.553 00000220 d5 12 f3 dc 8e 67 92 05 19 ea 92 5e 9b e4 f1 ef .....g.....^.... 00:22:29.553 00000230 f8 00 f0 15 c9 a1 ef bf 6d cf 4b d6 51 25 09 13 ........m.K.Q%.. 00:22:29.553 00000240 09 4e e8 e3 88 a5 b3 6b 5a a6 d3 f7 6c 4a 9f 59 .N.....kZ...lJ.Y 00:22:29.553 00000250 10 63 25 fc f5 9e 40 ae 2d ab cb b9 60 5f ea 45 .c%...@.-...`_.E 00:22:29.553 00000260 72 86 58 21 7e 1d 2e 9b d0 24 62 7d af df 4b 6e r.X!~....$b}..Kn 00:22:29.553 00000270 77 6c 99 67 39 9d 8b 92 47 66 92 4b 6a 36 c2 cb wl.g9...Gf.Kj6.. 00:22:29.553 00000280 e1 69 63 02 14 9a 5a d5 09 66 dc 5b 1c e9 e6 59 .ic...Z..f.[...Y 00:22:29.553 00000290 02 97 a2 05 8b cb 10 8f df c8 65 2f 34 ba dd a5 ..........e/4... 00:22:29.553 000002a0 51 91 a3 1c 65 c1 e9 12 ed 76 32 8d 4d b2 af 5a Q...e....v2.M..Z 00:22:29.553 000002b0 f6 0e 17 63 e9 24 7e 52 93 4c 3c a8 34 ef 61 be ...c.$~R.L<.4.a. 00:22:29.553 000002c0 de 70 50 eb 55 af bb 70 90 c3 d2 57 77 59 9a 79 .pP.U..p...WwY.y 00:22:29.553 000002d0 ba bf e1 b8 31 f1 2a db 13 f7 bd d2 4f 9f 94 88 ....1.*.....O... 00:22:29.553 000002e0 0c 6a 3e af 6c 55 cf 14 e1 5f 55 0d 61 d3 d9 24 .j>.lU..._U.a..$ 00:22:29.553 000002f0 b9 96 03 dd f1 b1 77 3b cf 9f 08 7e 76 6b 41 8d ......w;...~vkA. 
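The DEBUG trace above walks each of the two qpairs (the ...:0 and ...:1 prefixes) through the same DH-HMAC-CHAP sequence: negotiate -> await-negotiate -> await-challenge -> await-reply -> await-success1 -> await-success2 -> done, negotiating digest 3 (sha512) and dhgroup 4 (ffdhe6144) each round. As a reading aid only, the minimal sketch below replays that state progression; the array, loop, and output format are illustrative assumptions and are not the SPDK nvme_auth.c implementation.

    #include <stdio.h>

    /* Illustrative only: state names copied verbatim from the
     * nvme_auth_set_state() DEBUG lines in this log; this is not the
     * real SPDK state machine, just the order in which the states
     * appear for each qpair. */
    static const char *auth_states[] = {
        "negotiate",
        "await-negotiate",
        "await-challenge",
        "await-reply",
        "await-success1",
        "await-success2",
        "done",
    };

    int main(void)
    {
        /* Replay the progression seen above for qpair indices 0 and 1. */
        for (int qid = 0; qid < 2; qid++) {
            for (unsigned i = 0; i < sizeof(auth_states) / sizeof(auth_states[0]); i++) {
                printf("[qid %d] auth state: %s\n", qid, auth_states[i]);
            }
        }
        return 0;
    }
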
00:22:29.553 [2024-09-27 13:27:25.465191] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key1, hash=3, dhgroup=4, seq=3775755306, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.553 [2024-09-27 13:27:25.465573] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.553 [2024-09-27 13:27:25.530595] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.553 [2024-09-27 13:27:25.531081] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.553 [2024-09-27 13:27:25.531317] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.553 [2024-09-27 13:27:25.531546] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.553 [2024-09-27 13:27:25.670751] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.553 [2024-09-27 13:27:25.670988] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.553 [2024-09-27 13:27:25.671170] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 4 (ffdhe6144) 00:22:29.553 [2024-09-27 13:27:25.671370] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.553 [2024-09-27 13:27:25.671612] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.553 ctrlr pubkey: 00:22:29.553 00000000 93 ea d8 1b 9c 6e 73 7e 64 d1 9a a2 97 47 83 2b .....ns~d....G.+ 00:22:29.553 00000010 0f de 09 d6 37 a0 07 64 65 4e ee 99 0e 2a 6a 55 ....7..deN...*jU 00:22:29.553 00000020 80 3e 4b be a5 87 31 e7 3d bd 26 b9 22 03 4e b4 .>K...1.=.&.".N. 00:22:29.553 00000030 d6 eb 1a 06 bd 1f 02 09 bd e9 fb e2 b9 4d 55 27 .............MU' 00:22:29.553 00000040 de 32 28 cb b7 31 84 a3 82 50 47 24 8e 90 91 bb .2(..1...PG$.... 00:22:29.553 00000050 6f 7b 7d 41 56 ed 8e fb ad 3c 04 84 2c fd 5c 0f o{}AV....<..,.\. 00:22:29.553 00000060 c1 5a 1f c9 d7 1e d5 9a 86 49 29 60 18 0c e7 ce .Z.......I)`.... 00:22:29.553 00000070 1e 79 53 97 96 95 69 51 33 a2 bc 8c f1 a9 de 03 .yS...iQ3....... 00:22:29.553 00000080 a3 80 ff 22 96 c1 07 f8 74 4e 48 ad 76 54 ec 22 ..."....tNH.vT." 00:22:29.553 00000090 d4 fa 64 7e 1f a7 1b e7 6b 5c ce e4 86 1e 97 aa ..d~....k\...... 00:22:29.553 000000a0 fb dd 77 a9 19 f8 a1 6b 82 52 a3 86 6a 04 22 3b ..w....k.R..j."; 00:22:29.553 000000b0 54 8e 1d 29 d4 51 ea ba e8 f1 7f 87 0e 13 ca 20 T..).Q......... 00:22:29.553 000000c0 34 2c 55 ed ac be 6d c1 18 01 9c 2f a6 e3 0f da 4,U...m..../.... 00:22:29.553 000000d0 d3 a1 d2 ff 52 73 53 ae 2f b8 6d ee 79 ba 87 d9 ....RsS./.m.y... 00:22:29.553 000000e0 1c ec b0 ff b8 cd c0 d3 92 34 38 71 88 4b 21 8e .........48q.K!. 00:22:29.553 000000f0 33 f5 6f eb 70 9f 25 d8 0d 70 52 89 ca 00 a5 58 3.o.p.%..pR....X 00:22:29.553 00000100 4f 4b 4b 4c 65 e4 54 e2 1e 15 84 38 ad 59 ea 18 OKKLe.T....8.Y.. 
00:22:29.553 00000110 63 a1 58 d0 73 4b 76 95 07 56 6a c0 a6 74 08 e7 c.X.sKv..Vj..t.. 00:22:29.553 00000120 07 8b 93 f1 4f 75 d8 c3 d6 0a 8b 6d d7 1b e5 38 ....Ou.....m...8 00:22:29.553 00000130 ff c6 e1 cb d0 b0 d2 0e eb f2 ae 15 af 05 09 8c ................ 00:22:29.553 00000140 a3 9f 82 3c 29 2e 30 47 b7 e0 d6 e9 a5 bf 37 d5 ...<).0G......7. 00:22:29.553 00000150 66 2a f5 16 80 07 e4 15 eb bb 5d 33 41 2b c4 5d f*........]3A+.] 00:22:29.553 00000160 ed d4 e3 58 e5 b2 99 c0 1a 31 7b ed be 56 bf 87 ...X.....1{..V.. 00:22:29.553 00000170 d3 95 da 39 7c a9 63 66 1a be cc d1 98 ba d9 16 ...9|.cf........ 00:22:29.553 00000180 eb a2 fb 8b d4 01 d5 97 94 ea 8d 4a 18 68 20 44 ...........J.h D 00:22:29.553 00000190 d0 10 fd 74 49 85 80 d8 1d 5b 4a 6b e1 7c 9e 53 ...tI....[Jk.|.S 00:22:29.553 000001a0 25 64 9f 28 9b af 48 7d 0a 68 51 33 9b 6b c5 66 %d.(..H}.hQ3.k.f 00:22:29.553 000001b0 3c 14 62 6a 6d a6 7e fa 69 42 5a fc 60 ce ae 57 <.bjm.~.iBZ.`..W 00:22:29.553 000001c0 78 25 2e a6 18 03 1e be a5 60 68 84 7a 77 8d b2 x%.......`h.zw.. 00:22:29.553 000001d0 ae 5f a6 aa 76 4f 4e 12 1e ba 09 98 0a 10 97 76 ._..vON........v 00:22:29.553 000001e0 1e 66 87 c9 c1 95 8f 43 a9 5f 81 38 95 67 01 ca .f.....C._.8.g.. 00:22:29.553 000001f0 00 c0 42 58 11 9d aa c9 96 5b 5d 2a f0 d9 90 2d ..BX.....[]*...- 00:22:29.553 00000200 c9 f9 fb 48 3d dc e1 b0 51 40 07 d9 d7 da ef cd ...H=...Q@...... 00:22:29.553 00000210 f5 9f fe 20 52 19 a7 a7 6c a5 80 a7 ed 47 65 32 ... R...l....Ge2 00:22:29.553 00000220 ab b5 0e a4 82 13 2f 2a e3 82 43 9a 04 d5 d9 1d ....../*..C..... 00:22:29.553 00000230 1e 22 31 b6 06 7b e0 15 34 a4 4c 9a b1 3c 61 03 ."1..{..4.L.. 00:22:29.553 00000280 6d f2 cb b2 98 b7 94 6d 30 0f ec c5 d0 3c aa 4e m......m0....<.N 00:22:29.553 00000290 4d ad 11 2b ad 2c d0 95 ab 81 4b 55 5c 4f 5a 1e M..+.,....KU\OZ. 00:22:29.553 000002a0 a2 e0 a6 b0 b5 25 b9 aa 37 ad 34 12 d7 eb 72 1a .....%..7.4...r. 00:22:29.553 000002b0 d4 c4 f4 d7 61 51 e0 12 b7 5e 6f 0c dc 6c b8 97 ....aQ...^o..l.. 00:22:29.553 000002c0 c2 ab a7 e7 c4 b3 7b 14 7c 9e de 58 d9 11 66 b9 ......{.|..X..f. 00:22:29.553 000002d0 f1 d0 a7 bc c1 be 22 2f c5 23 2b a6 d0 87 cf 1c ......"/.#+..... 00:22:29.553 000002e0 50 6a fb 5a ef 82 6c 22 b8 35 9e 60 a9 1c 4f 88 Pj.Z..l".5.`..O. 00:22:29.553 000002f0 28 db 78 6e 5a b7 85 4e 44 ee e2 1f b9 af 58 20 (.xnZ..ND.....X 00:22:29.553 host pubkey: 00:22:29.553 00000000 eb 8c 3a 40 41 86 03 3d 11 58 14 8a 67 6b 3d a2 ..:@A..=.X..gk=. 00:22:29.553 00000010 b2 34 20 4c aa 9f 1d 9c a5 49 02 48 8a d3 03 55 .4 L.....I.H...U 00:22:29.553 00000020 0b 17 0e dd 00 70 c3 ad c0 45 9d 13 7b b2 e9 25 .....p...E..{..% 00:22:29.553 00000030 4c fa b9 3a a6 7a 66 d5 06 83 dd 80 16 8a 0f 1a L..:.zf......... 00:22:29.553 00000040 1d 7c d9 ab a0 6a 39 b0 3d 50 ee 04 40 52 c4 8b .|...j9.=P..@R.. 00:22:29.553 00000050 a6 95 d6 93 a5 48 db ad 22 eb a6 fa c9 46 1a 19 .....H.."....F.. 00:22:29.553 00000060 32 fd a2 8f 27 67 47 6a 25 c9 10 76 c2 5f 97 fb 2...'gGj%..v._.. 00:22:29.553 00000070 e7 24 7c 62 83 7a bb bb e6 b4 d4 88 52 dd d2 81 .$|b.z......R... 00:22:29.553 00000080 0d 91 d3 d0 4e d9 4c cc d7 b9 da e3 06 29 b4 e4 ....N.L......).. 00:22:29.553 00000090 c1 d6 78 a2 ae 47 53 6b 29 30 a6 47 fe 26 31 b7 ..x..GSk)0.G.&1. 
00:22:29.553 000000a0 fc ca 92 07 2b 0e 4e 56 cf de 6a b9 b0 b4 7b 3a ....+.NV..j...{: 00:22:29.553 000000b0 f1 ff f7 58 f2 41 21 65 fb 81 13 61 23 e0 aa 39 ...X.A!e...a#..9 00:22:29.553 000000c0 1f 18 b6 0a 36 8c 7f 84 4f cd b9 29 51 17 7c 20 ....6...O..)Q.| 00:22:29.553 000000d0 4c c1 51 60 1c 25 15 d2 06 47 91 e8 cf 0e 70 54 L.Q`.%...G....pT 00:22:29.553 000000e0 e9 b1 32 1d 36 bb 4f 80 ec 52 19 4f cb 3a aa 4b ..2.6.O..R.O.:.K 00:22:29.553 000000f0 4e b3 85 b7 6d a9 d6 90 db 19 ed 2a 20 41 39 54 N...m......* A9T 00:22:29.553 00000100 d5 a0 df 03 c3 69 41 15 d1 4c bf 17 01 99 69 35 .....iA..L....i5 00:22:29.553 00000110 c5 7d 73 61 e7 60 12 e7 b2 18 eb 37 90 48 9c 80 .}sa.`.....7.H.. 00:22:29.553 00000120 d1 f3 17 0f d6 7a 29 54 48 6d 0a c3 d7 3e 7d d0 .....z)THm...>}. 00:22:29.553 00000130 2f 01 ef 32 74 d3 17 2c 56 27 8e e1 13 76 6d 45 /..2t..,V'...vmE 00:22:29.553 00000140 f2 9a 07 0a ed ce ae c0 6f df f6 3e c7 5a 6a 73 ........o..>.Zjs 00:22:29.553 00000150 1d 17 0f d9 a9 bc d5 42 4d 28 8a 3e 1d e7 4d bd .......BM(.>..M. 00:22:29.553 00000160 7b 69 80 6c 78 4e 3f 1a e0 8a 24 44 78 47 c6 3f {i.lxN?...$DxG.? 00:22:29.553 00000170 2b 4c 73 af 13 15 30 32 1f 1d c5 66 4f 0d 33 71 +Ls...02...fO.3q 00:22:29.553 00000180 67 b0 43 25 22 4a 47 44 5d 33 e9 8b 43 ac b8 f5 g.C%"JGD]3..C... 00:22:29.553 00000190 5b bd 7b e3 4e 6f e9 22 42 bc 08 24 51 6b da 3e [.{.No."B..$Qk.> 00:22:29.553 000001a0 78 79 1e 84 25 6d b0 20 84 f0 f6 ca a7 99 86 bc xy..%m. ........ 00:22:29.553 000001b0 67 6e 0d 66 60 82 e7 01 c8 83 ef f3 88 29 69 fe gn.f`........)i. 00:22:29.553 000001c0 25 27 b5 98 37 a9 1d 5d 26 c3 56 8a 1c 83 8a 24 %'..7..]&.V....$ 00:22:29.553 000001d0 f7 a6 9b aa 04 4b da ba 27 89 b8 62 a4 e8 a2 a6 .....K..'..b.... 00:22:29.553 000001e0 e1 d2 30 1c 32 86 ee da 66 8b d6 49 f6 42 73 54 ..0.2...f..I.BsT 00:22:29.553 000001f0 06 af cc b5 6e b9 b5 3c dd 75 77 27 8f a8 6f 72 ....n..<.uw'..or 00:22:29.553 00000200 7f d7 7e 94 1f 6b 3a 7d 6c e1 44 e7 75 14 86 06 ..~..k:}l.D.u... 00:22:29.553 00000210 b3 b7 42 c9 0b ab a7 36 7b d8 6c 99 8b 5b 88 c0 ..B....6{.l..[.. 00:22:29.553 00000220 a9 52 b9 7f aa 37 7d a4 22 25 cc bf a1 f2 a9 2d .R...7}."%.....- 00:22:29.553 00000230 e3 9c 0a 86 2d 1e 9a 13 8c 2f 43 54 1c 58 47 2a ....-..../CT.XG* 00:22:29.553 00000240 c8 99 a9 ba db 2f a1 c3 54 c9 8c 20 3a d7 92 76 ...../..T.. :..v 00:22:29.553 00000250 17 2e bc 72 54 44 f9 4d e2 24 a7 ab 51 4c ca 48 ...rTD.M.$..QL.H 00:22:29.553 00000260 09 2a b9 b7 38 b0 de cd d3 0f 7d bb b6 ac 47 cc .*..8.....}...G. 00:22:29.553 00000270 d1 41 f4 f6 29 76 da 1e 75 fc 7d 3c 1e 51 1d eb .A..)v..u.}<.Q.. 00:22:29.553 00000280 da 61 ac dc 82 57 ed aa 09 09 76 7c d7 e6 c7 d0 .a...W....v|.... 00:22:29.553 00000290 80 33 ed e2 6a a4 65 82 df cd 6b 54 1d 08 a6 89 .3..j.e...kT.... 00:22:29.553 000002a0 39 02 a4 47 b5 ce 71 50 6b ca c7 ef 68 9d 1f 78 9..G..qPk...h..x 00:22:29.553 000002b0 c3 f9 74 89 fe 5f 6b 5b fe e7 e3 39 ed 44 29 85 ..t.._k[...9.D). 00:22:29.553 000002c0 1a 8a 20 ea 61 73 01 3a e7 08 a1 14 aa 6b c1 55 .. .as.:.....k.U 00:22:29.553 000002d0 36 25 54 5a 24 e5 a2 46 85 81 4f c1 28 04 c5 c2 6%TZ$..F..O.(... 00:22:29.553 000002e0 12 51 4d 69 23 f2 99 70 d8 2f 89 e9 55 e0 1d 1c .QMi#..p./..U... 00:22:29.553 000002f0 00 0d 5a a8 5a ea 96 d0 da d4 3c 53 02 46 ad 5c ..Z.Z.....K...1.=.&.".N. 00:22:29.554 00000030 d6 eb 1a 06 bd 1f 02 09 bd e9 fb e2 b9 4d 55 27 .............MU' 00:22:29.554 00000040 de 32 28 cb b7 31 84 a3 82 50 47 24 8e 90 91 bb .2(..1...PG$.... 
00:22:29.554 00000050 6f 7b 7d 41 56 ed 8e fb ad 3c 04 84 2c fd 5c 0f o{}AV....<..,.\. 00:22:29.554 00000060 c1 5a 1f c9 d7 1e d5 9a 86 49 29 60 18 0c e7 ce .Z.......I)`.... 00:22:29.554 00000070 1e 79 53 97 96 95 69 51 33 a2 bc 8c f1 a9 de 03 .yS...iQ3....... 00:22:29.554 00000080 a3 80 ff 22 96 c1 07 f8 74 4e 48 ad 76 54 ec 22 ..."....tNH.vT." 00:22:29.554 00000090 d4 fa 64 7e 1f a7 1b e7 6b 5c ce e4 86 1e 97 aa ..d~....k\...... 00:22:29.554 000000a0 fb dd 77 a9 19 f8 a1 6b 82 52 a3 86 6a 04 22 3b ..w....k.R..j."; 00:22:29.554 000000b0 54 8e 1d 29 d4 51 ea ba e8 f1 7f 87 0e 13 ca 20 T..).Q......... 00:22:29.554 000000c0 34 2c 55 ed ac be 6d c1 18 01 9c 2f a6 e3 0f da 4,U...m..../.... 00:22:29.554 000000d0 d3 a1 d2 ff 52 73 53 ae 2f b8 6d ee 79 ba 87 d9 ....RsS./.m.y... 00:22:29.554 000000e0 1c ec b0 ff b8 cd c0 d3 92 34 38 71 88 4b 21 8e .........48q.K!. 00:22:29.554 000000f0 33 f5 6f eb 70 9f 25 d8 0d 70 52 89 ca 00 a5 58 3.o.p.%..pR....X 00:22:29.554 00000100 4f 4b 4b 4c 65 e4 54 e2 1e 15 84 38 ad 59 ea 18 OKKLe.T....8.Y.. 00:22:29.554 00000110 63 a1 58 d0 73 4b 76 95 07 56 6a c0 a6 74 08 e7 c.X.sKv..Vj..t.. 00:22:29.554 00000120 07 8b 93 f1 4f 75 d8 c3 d6 0a 8b 6d d7 1b e5 38 ....Ou.....m...8 00:22:29.554 00000130 ff c6 e1 cb d0 b0 d2 0e eb f2 ae 15 af 05 09 8c ................ 00:22:29.554 00000140 a3 9f 82 3c 29 2e 30 47 b7 e0 d6 e9 a5 bf 37 d5 ...<).0G......7. 00:22:29.554 00000150 66 2a f5 16 80 07 e4 15 eb bb 5d 33 41 2b c4 5d f*........]3A+.] 00:22:29.554 00000160 ed d4 e3 58 e5 b2 99 c0 1a 31 7b ed be 56 bf 87 ...X.....1{..V.. 00:22:29.554 00000170 d3 95 da 39 7c a9 63 66 1a be cc d1 98 ba d9 16 ...9|.cf........ 00:22:29.554 00000180 eb a2 fb 8b d4 01 d5 97 94 ea 8d 4a 18 68 20 44 ...........J.h D 00:22:29.554 00000190 d0 10 fd 74 49 85 80 d8 1d 5b 4a 6b e1 7c 9e 53 ...tI....[Jk.|.S 00:22:29.554 000001a0 25 64 9f 28 9b af 48 7d 0a 68 51 33 9b 6b c5 66 %d.(..H}.hQ3.k.f 00:22:29.554 000001b0 3c 14 62 6a 6d a6 7e fa 69 42 5a fc 60 ce ae 57 <.bjm.~.iBZ.`..W 00:22:29.554 000001c0 78 25 2e a6 18 03 1e be a5 60 68 84 7a 77 8d b2 x%.......`h.zw.. 00:22:29.554 000001d0 ae 5f a6 aa 76 4f 4e 12 1e ba 09 98 0a 10 97 76 ._..vON........v 00:22:29.554 000001e0 1e 66 87 c9 c1 95 8f 43 a9 5f 81 38 95 67 01 ca .f.....C._.8.g.. 00:22:29.554 000001f0 00 c0 42 58 11 9d aa c9 96 5b 5d 2a f0 d9 90 2d ..BX.....[]*...- 00:22:29.554 00000200 c9 f9 fb 48 3d dc e1 b0 51 40 07 d9 d7 da ef cd ...H=...Q@...... 00:22:29.554 00000210 f5 9f fe 20 52 19 a7 a7 6c a5 80 a7 ed 47 65 32 ... R...l....Ge2 00:22:29.554 00000220 ab b5 0e a4 82 13 2f 2a e3 82 43 9a 04 d5 d9 1d ....../*..C..... 00:22:29.554 00000230 1e 22 31 b6 06 7b e0 15 34 a4 4c 9a b1 3c 61 03 ."1..{..4.L.. 00:22:29.554 00000280 6d f2 cb b2 98 b7 94 6d 30 0f ec c5 d0 3c aa 4e m......m0....<.N 00:22:29.554 00000290 4d ad 11 2b ad 2c d0 95 ab 81 4b 55 5c 4f 5a 1e M..+.,....KU\OZ. 00:22:29.554 000002a0 a2 e0 a6 b0 b5 25 b9 aa 37 ad 34 12 d7 eb 72 1a .....%..7.4...r. 00:22:29.554 000002b0 d4 c4 f4 d7 61 51 e0 12 b7 5e 6f 0c dc 6c b8 97 ....aQ...^o..l.. 00:22:29.554 000002c0 c2 ab a7 e7 c4 b3 7b 14 7c 9e de 58 d9 11 66 b9 ......{.|..X..f. 00:22:29.554 000002d0 f1 d0 a7 bc c1 be 22 2f c5 23 2b a6 d0 87 cf 1c ......"/.#+..... 00:22:29.554 000002e0 50 6a fb 5a ef 82 6c 22 b8 35 9e 60 a9 1c 4f 88 Pj.Z..l".5.`..O. 00:22:29.554 000002f0 28 db 78 6e 5a b7 85 4e 44 ee e2 1f b9 af 58 20 (.xnZ..ND.....X 00:22:29.554 host pubkey: 00:22:29.554 00000000 35 8a e1 23 16 e9 49 da 91 fa 0a a6 33 ee 4d 9b 5..#..I.....3.M. 
00:22:29.554 00000010 e5 4a 1c 17 f6 c7 a2 9e d3 dd 8e 64 62 4a 7a 50 .J.........dbJzP 00:22:29.554 00000020 ce a9 ff ab c1 55 ff a1 72 b1 38 57 fb 92 ca a1 .....U..r.8W.... 00:22:29.554 00000030 83 c7 8c 51 f7 1d 88 7a c7 c2 83 58 f4 e3 d6 48 ...Q...z...X...H 00:22:29.554 00000040 34 d6 e9 e9 88 6b fd 85 48 e4 5c a8 f0 5a a1 a1 4....k..H.\..Z.. 00:22:29.554 00000050 bd dc 8e 30 eb 07 3f 0b f9 96 c9 be c5 f8 8b 9d ...0..?......... 00:22:29.554 00000060 89 ef 7b f0 74 f4 2a 0a 22 c3 52 41 bc ce d7 75 ..{.t.*.".RA...u 00:22:29.554 00000070 20 79 a0 bc 29 52 93 9f 7b 39 7c b5 e3 d7 73 cb y..)R..{9|...s. 00:22:29.554 00000080 a3 b8 b6 a2 58 7d 3a 0e 16 91 75 ff 30 22 9b 9a ....X}:...u.0".. 00:22:29.554 00000090 ed cf 02 a6 95 08 42 1e 57 a7 6b 19 0f dc f9 6e ......B.W.k....n 00:22:29.554 000000a0 fc bd 56 8e 55 42 09 d8 7c 6b 46 6e 3c 88 9b 4f ..V.UB..|kFn<..O 00:22:29.554 000000b0 7c 94 86 57 7b 5c 8b ca a8 90 f4 7d 25 da d3 73 |..W{\.....}%..s 00:22:29.554 000000c0 b1 54 04 dc 0f 24 8f cd 93 8f 9a 89 ac 1e 20 b2 .T...$........ . 00:22:29.554 000000d0 d3 9d 63 1a 74 df 14 70 3f 9d ba 53 93 91 fb 27 ..c.t..p?..S...' 00:22:29.554 000000e0 42 14 58 2b 84 a4 28 b2 a8 8d b2 3a b2 44 62 1c B.X+..(....:.Db. 00:22:29.554 000000f0 d9 03 bd 32 fe bb c4 ca b6 57 ce 89 75 ff cb 1f ...2.....W..u... 00:22:29.554 00000100 e7 9d 22 41 c4 b0 e7 62 8d ff ab 76 7e 44 f3 5f .."A...b...v~D._ 00:22:29.554 00000110 31 d9 70 78 78 65 3f bc 1e 35 cd b6 3e 57 8f e0 1.pxxe?..5..>W.. 00:22:29.554 00000120 ec 25 22 10 0c 3c 53 0a ba 7d a1 26 3c b8 ba 25 .%"..0.d^J.C....3. 00:22:29.555 000000a0 80 b2 ea a6 2e 0d 04 83 c1 23 5e 4c c0 da 01 a3 .........#^L.... 00:22:29.555 000000b0 c1 a5 ee c5 fb 1d a7 d0 01 ab 5b fc 4a 42 20 77 ..........[.JB w 00:22:29.555 000000c0 06 cb 3a 90 95 6c ce f3 5b 04 33 46 f0 a3 90 6a ..:..l..[.3F...j 00:22:29.555 000000d0 78 5e 62 0e be da 71 dc 18 af 3b a0 a6 ec 2c dc x^b...q...;...,. 00:22:29.555 000000e0 47 38 20 34 a5 2a 20 16 7c bb 21 7a 48 c5 43 4e G8 4.* .|.!zH.CN 00:22:29.555 000000f0 bd 7d 94 2d 91 1e 4e 22 84 c7 2b 19 42 6a 4c 31 .}.-..N"..+.BjL1 00:22:29.555 00000100 a6 c0 5c 99 9f 2d d3 f4 bb db 92 7a 86 49 d8 d8 ..\..-.....z.I.. 00:22:29.555 00000110 f3 41 93 e8 34 0f 75 5d 9e 95 3d eb 8b 59 1e 1c .A..4.u]..=..Y.. 00:22:29.555 00000120 fb b4 6d be cf bb be f9 55 53 d2 28 f4 d2 ab bc ..m.....US.(.... 00:22:29.555 00000130 66 bb b4 9a de 5a 88 6e df 02 86 f5 63 81 2a 4b f....Z.n....c.*K 00:22:29.555 00000140 23 91 23 39 2a 01 d1 3e 36 ed 07 f3 b9 52 13 c2 #.#9*..>6....R.. 00:22:29.555 00000150 0a 40 09 61 41 c5 40 a8 1b 97 2e 65 14 07 2a 96 .@.aA.@....e..*. 00:22:29.555 00000160 08 62 ef 44 97 d4 67 4c df d3 ab ff 56 2f 03 63 .b.D..gL....V/.c 00:22:29.555 00000170 2e 73 3e f0 36 d6 18 d8 b3 35 ee fb 07 28 12 52 .s>.6....5...(.R 00:22:29.555 00000180 6e f4 d0 c8 7d 75 32 1b 39 f3 db 06 cf 05 23 94 n...}u2.9.....#. 00:22:29.555 00000190 fa b2 aa e6 d5 dc ea 5e de c2 d8 d7 6f e2 8e d1 .......^....o... 00:22:29.555 000001a0 be 95 ee bf b1 ef 00 e0 b4 90 07 e3 52 c8 29 2d ............R.)- 00:22:29.555 000001b0 ff 50 3b 92 f2 84 91 3a 2b a3 aa 3a 54 93 81 6b .P;....:+..:T..k 00:22:29.555 000001c0 ed f0 5d 1c ce fa 7c 4f dc c2 34 01 ce 57 ae 09 ..]...|O..4..W.. 00:22:29.555 000001d0 99 dc bf 1d 6f 06 f8 74 79 5d 97 04 01 6c 3c 01 ....o..ty]...l<. 
00:22:29.555 000001e0 4e bd cd e9 c1 27 b4 65 2e de 45 ae 41 d7 fa 4f N....'.e..E.A..O 00:22:29.555 000001f0 42 be 9a 2b 94 16 94 ee 66 77 ed d8 51 b0 84 77 B..+....fw..Q..w 00:22:29.555 00000200 8d 3d 30 b3 a6 fc 5a c9 a3 81 ab 74 de 77 90 28 .=0...Z....t.w.( 00:22:29.555 00000210 85 08 c4 d4 ea 35 72 80 fb 1b 0c cc c5 f4 02 2d .....5r........- 00:22:29.555 00000220 bc a2 e7 e5 7c c8 f5 a7 a2 7d 7f cf 57 1a 80 77 ....|....}..W..w 00:22:29.555 00000230 18 f6 21 37 6a 77 92 f3 1b 1e 8e 05 48 d3 d6 8a ..!7jw......H... 00:22:29.555 00000240 86 15 8c 9d 8c 72 41 39 21 b3 d9 4e 22 50 fa 44 .....rA9!..N"P.D 00:22:29.555 00000250 6b f0 5d 6e 61 a5 0c 4a d9 6d 3b 6d 85 e8 9d 80 k.]na..J.m;m.... 00:22:29.555 00000260 45 49 cd 65 9c 97 7d 2b e8 e8 76 9a 1e a1 71 f2 EI.e..}+..v...q. 00:22:29.555 00000270 7e a5 6d ff a8 fb 6c 10 87 b4 5b 37 bf 55 3e a2 ~.m...l...[7.U>. 00:22:29.555 00000280 0b e4 0d 0b b5 08 ec 78 07 bc 54 00 1c 21 d2 cb .......x..T..!.. 00:22:29.555 00000290 5d c4 69 76 fc d0 dc d9 4c c9 99 e8 98 34 8d 61 ].iv....L....4.a 00:22:29.555 000002a0 ce a4 ad 86 85 c0 40 fc ff c3 86 e2 c6 2b dd b4 ......@......+.. 00:22:29.555 000002b0 73 13 3b c7 16 e5 29 17 45 c6 43 35 41 2d 15 30 s.;...).E.C5A-.0 00:22:29.555 000002c0 50 91 47 29 1e de a8 8f 22 42 13 60 de 17 a7 c3 P.G)...."B.`.... 00:22:29.555 000002d0 b2 be 7b 06 bc e6 68 52 70 f5 7b 41 14 95 9c 60 ..{...hRp.{A...` 00:22:29.555 000002e0 3e fa 34 1b 24 cd d7 50 9b ae bd bc 55 3a a9 11 >.4.$..P....U:.. 00:22:29.555 000002f0 a3 78 06 f2 20 f5 d8 c6 dc 5b 1d 10 00 73 16 eb .x.. ....[...s.. 00:22:29.555 [2024-09-27 13:27:25.918249] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key2, hash=3, dhgroup=4, seq=3775755308, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.555 [2024-09-27 13:27:25.918591] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.555 [2024-09-27 13:27:25.973227] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.555 [2024-09-27 13:27:25.973659] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.555 [2024-09-27 13:27:25.973895] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.555 [2024-09-27 13:27:25.974145] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.555 [2024-09-27 13:27:26.103893] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.555 [2024-09-27 13:27:26.104145] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.555 [2024-09-27 13:27:26.104320] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 4 (ffdhe6144) 00:22:29.555 [2024-09-27 13:27:26.104418] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.555 [2024-09-27 13:27:26.104651] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] 
auth state: await-challenge 00:22:29.555 ctrlr pubkey: 00:22:29.555 00000000 e6 ff 43 9b 57 58 81 d8 e4 93 6f 32 21 6e df 64 ..C.WX....o2!n.d 00:22:29.555 00000010 01 b2 f6 c9 6e 80 ef 26 96 0b 8e 9e 28 25 f3 a4 ....n..&....(%.. 00:22:29.555 00000020 91 01 1e 38 03 35 c6 25 48 06 39 c2 4f a1 f4 a3 ...8.5.%H.9.O... 00:22:29.555 00000030 b9 d7 a9 3d e5 d2 a1 2d f0 f2 85 4a 9a 6b 73 7d ...=...-...J.ks} 00:22:29.555 00000040 d6 6e 0e bc e3 70 21 d5 23 73 56 fa 82 7a bd e4 .n...p!.#sV..z.. 00:22:29.555 00000050 a6 9e c7 04 cf be 29 4d ec 89 8b 75 3e e5 f0 6e ......)M...u>..n 00:22:29.555 00000060 80 80 71 25 0e 11 8f 44 0e f5 2c 1b e4 d0 c5 75 ..q%...D..,....u 00:22:29.555 00000070 62 7f 77 b8 69 4b 4c af ff c2 41 2a 88 df c8 40 b.w.iKL...A*...@ 00:22:29.555 00000080 30 72 d6 f1 4d 9a fa f8 85 d2 4b 6c 53 ff a5 b0 0r..M.....KlS... 00:22:29.555 00000090 22 cd 47 f8 90 b2 56 5c de 6b 14 bf b9 71 2b a8 ".G...V\.k...q+. 00:22:29.555 000000a0 0e f8 19 6c 66 cb ff e6 4e 4b 4b 28 d5 7e e1 8b ...lf...NKK(.~.. 00:22:29.555 000000b0 38 a7 43 09 cc 9c e8 ca 99 e7 c0 f1 7c d4 81 91 8.C.........|... 00:22:29.555 000000c0 fd fb 83 31 4b c0 81 b2 25 09 08 dd da 12 e2 3f ...1K...%......? 00:22:29.555 000000d0 3b 4c 6d c8 e5 63 4a 5c d1 68 c5 3e 46 b5 de b1 ;Lm..cJ\.h.>F... 00:22:29.555 000000e0 38 bc 82 54 e0 69 81 87 a3 06 b6 8e c2 c5 52 96 8..T.i........R. 00:22:29.555 000000f0 08 b1 90 21 28 ad 92 8a bf 1f 33 50 dc ad b4 8e ...!(.....3P.... 00:22:29.555 00000100 97 f8 ac 1d fc e9 75 c7 23 1b 19 08 11 d9 02 ef ......u.#....... 00:22:29.555 00000110 65 c4 05 4f 43 d2 a1 d6 b2 3b aa 0a 46 94 bb 82 e..OC....;..F... 00:22:29.555 00000120 29 28 f2 bf e5 e0 79 56 08 cb 1e 25 b7 36 fb f5 )(....yV...%.6.. 00:22:29.555 00000130 8e a4 bd f8 12 0e 34 ff cd 3a 55 b0 7a e0 8f 1b ......4..:U.z... 00:22:29.555 00000140 cd 3e b9 c2 56 b6 28 94 cb 6f a2 d9 70 cd 0c 5a .>..V.(..o..p..Z 00:22:29.555 00000150 c4 e3 2d fd 2a 8d ed 15 89 08 06 01 70 dc be 47 ..-.*.......p..G 00:22:29.555 00000160 6e 74 b0 a3 19 ea 84 eb bf 17 9b c8 ca c7 67 9b nt............g. 00:22:29.555 00000170 37 63 33 19 c2 62 2a 9d 45 b6 0a 56 f4 29 fc bc 7c3..b*.E..V.).. 00:22:29.555 00000180 3a 41 c0 92 5a d1 be 97 10 1d be 38 49 31 40 ce :A..Z......8I1@. 00:22:29.555 00000190 92 fd 53 ca e3 9f b3 27 8f f7 bc d2 9a 34 ab 71 ..S....'.....4.q 00:22:29.555 000001a0 2e 06 24 c9 b5 69 2d dd a1 c1 06 d3 0a 6b 8a c6 ..$..i-......k.. 00:22:29.555 000001b0 f3 7f 5a a3 02 90 9a c1 37 5d 05 11 4b 6a 23 24 ..Z.....7]..Kj#$ 00:22:29.555 000001c0 92 d2 bc cf d4 f3 96 14 50 6c e5 2a 23 51 03 da ........Pl.*#Q.. 00:22:29.555 000001d0 82 2a 22 b6 fb 99 0a ff fc 1b c4 5a 03 d3 bc ac .*"........Z.... 00:22:29.555 000001e0 f2 bf 02 8f cc 3a 9b 56 41 15 50 ff de 45 e3 d3 .....:.VA.P..E.. 00:22:29.555 000001f0 71 08 c6 a4 b5 75 b1 32 ce 79 60 bd 1f 50 83 94 q....u.2.y`..P.. 00:22:29.555 00000200 92 29 24 40 2c c2 fc 87 c7 55 6a da 1c 59 7e a7 .)$@,....Uj..Y~. 00:22:29.555 00000210 8f 0c fc 8d 22 07 c4 ad 10 43 91 47 99 d4 4c 57 ...."....C.G..LW 00:22:29.555 00000220 0c b2 1e 72 2b e1 b1 d3 7f 9d a7 9e 9e 6a ae a4 ...r+........j.. 00:22:29.555 00000230 69 2b 37 1a 44 82 c9 53 e1 96 73 b0 7e 44 54 18 i+7.D..S..s.~DT. 00:22:29.555 00000240 8b b6 5e 67 4d 60 24 f5 d9 55 81 87 e2 3e 60 22 ..^gM`$..U...>`" 00:22:29.555 00000250 33 01 c6 70 22 3b de 0c d5 a8 db 9f cf 51 18 37 3..p";.......Q.7 00:22:29.555 00000260 01 b7 f9 59 92 47 37 e6 e4 13 0d 27 be 07 26 84 ...Y.G7....'..&. 00:22:29.555 00000270 97 85 f2 6d a4 90 88 72 21 4e 02 10 51 37 20 af ...m...r!N..Q7 . 
00:22:29.555 00000280 bf 65 4b 5c ea b1 ff 13 36 b8 34 8d bd 29 6a df .eK\....6.4..)j. 00:22:29.555 00000290 51 6c f9 13 bf 87 49 cc 95 58 92 b8 0a 90 b0 bf Ql....I..X...... 00:22:29.555 000002a0 49 95 5a 71 93 a4 93 b8 5b 3d 77 7e 36 a7 d1 0a I.Zq....[=w~6... 00:22:29.555 000002b0 a2 54 27 73 66 0a 8a 23 ff 82 f7 3f c1 b7 61 e2 .T'sf..#...?..a. 00:22:29.555 000002c0 f2 d1 3b ba a1 db ca 98 12 66 a2 70 06 da 9f e2 ..;......f.p.... 00:22:29.555 000002d0 56 8f 50 ab 4b 27 6c aa 1a 1d bc a1 c3 57 59 66 V.P.K'l......WYf 00:22:29.555 000002e0 78 95 47 3f 51 c4 ff d5 a0 a0 4e 40 76 dc c6 82 x.G?Q.....N@v... 00:22:29.555 000002f0 a2 31 56 bd 48 7c 4b a2 c4 87 1b 99 bc 99 db 59 .1V.H|K........Y 00:22:29.555 host pubkey: 00:22:29.555 00000000 fd 6a 12 c2 65 d3 37 11 ca be dd d2 b8 be 43 c2 .j..e.7.......C. 00:22:29.555 00000010 96 bd 72 ff dc de 1d 30 be 9a 40 6e aa 71 95 c7 ..r....0..@n.q.. 00:22:29.555 00000020 db ae e5 ca 3a c8 eb 85 84 83 40 1c 63 ad eb ab ....:.....@.c... 00:22:29.555 00000030 40 0b d0 15 48 28 64 c3 4d 68 c0 1c 12 0b 96 d9 @...H(d.Mh...... 00:22:29.555 00000040 ce 54 84 fb 5c 57 4e ca 7d 33 24 5d 3d 40 ce 62 .T..\WN.}3$]=@.b 00:22:29.555 00000050 50 60 5a 6b 70 ae 6f 64 de 5d 31 a8 ae f8 24 ec P`Zkp.od.]1...$. 00:22:29.555 00000060 6f dc 18 52 51 81 fa 63 cd e3 05 d2 a2 c9 16 95 o..RQ..c........ 00:22:29.555 00000070 1f d7 6c b0 88 57 9f 77 0e a6 3a 63 d6 58 33 aa ..l..W.w..:c.X3. 00:22:29.555 00000080 bf 17 b4 d5 fb 4e f3 dd aa 1a 68 78 05 a7 56 b6 .....N....hx..V. 00:22:29.555 00000090 f4 7d f3 f1 56 5f 41 46 1a 4f 41 f5 f3 f4 7e 2e .}..V_AF.OA...~. 00:22:29.555 000000a0 37 f2 5f 05 be 2f 52 b4 7f aa 9d de 69 34 7a 8f 7._../R.....i4z. 00:22:29.555 000000b0 85 1e 3c 95 73 1e 03 fd 53 9c 15 18 29 ca 3e de ..<.s...S...).>. 00:22:29.555 000000c0 1c 29 db fb 7a fe 70 c2 f3 87 44 92 f8 8a 4a db .)..z.p...D...J. 00:22:29.555 000000d0 46 32 e1 5f f4 04 47 f0 63 68 81 77 46 d4 b3 90 F2._..G.ch.wF... 00:22:29.555 000000e0 39 97 c5 bd a5 f3 ca 16 0b cc de 85 f1 da 5e a9 9.............^. 00:22:29.555 000000f0 e9 7d a9 51 7d a1 64 a1 c6 4d 3e 51 d4 b7 ef cb .}.Q}.d..M>Q.... 00:22:29.555 00000100 2b bb 65 b5 ef 41 c6 57 bc 1d 9b ba 60 08 9b 57 +.e..A.W....`..W 00:22:29.555 00000110 fe 91 5f 83 75 e3 dd 80 d0 b6 b7 17 85 65 4f ca .._.u........eO. 00:22:29.555 00000120 45 48 21 eb 32 64 8c 9b 09 ce 71 b8 a7 c0 3f 46 EH!.2d....q...?F 00:22:29.555 00000130 45 27 4f e4 ea 70 c0 21 a4 a8 02 02 ca b2 f4 68 E'O..p.!.......h 00:22:29.555 00000140 28 47 55 af a2 71 a1 86 d4 7e 16 4d 74 49 17 2b (GU..q...~.MtI.+ 00:22:29.555 00000150 8f fa 9d ae 2e 92 2f 63 f3 5d f6 52 0b e8 c3 c9 ....../c.].R.... 00:22:29.555 00000160 e0 a8 36 3c 12 a3 f7 4f 67 b0 66 c1 0b ad 90 ac ..6<...Og.f..... 00:22:29.555 00000170 82 9d a8 e9 dc 54 89 76 58 78 47 2e 67 c2 7b d5 .....T.vXxG.g.{. 00:22:29.555 00000180 1f fa cb 39 48 43 16 cd d7 b2 db 81 96 b4 de 8a ...9HC.......... 00:22:29.555 00000190 bb 35 e6 1a 6a be 44 18 bf 41 54 2d 46 ba 52 a2 .5..j.D..AT-F.R. 00:22:29.555 000001a0 38 7c 18 1a f0 ef 80 0d 06 e8 d0 73 ac b9 48 98 8|.........s..H. 00:22:29.555 000001b0 9b e9 b3 67 34 8e 8d 3b cd 9b ea 19 89 fe 39 50 ...g4..;......9P 00:22:29.555 000001c0 fa dd 37 fe ed 5b d1 12 49 a3 fc d9 2e c8 7e e0 ..7..[..I.....~. 00:22:29.555 000001d0 5a bc 18 40 01 6b 5a e8 6d 30 64 26 75 41 f2 8d Z..@.kZ.m0d&uA.. 00:22:29.555 000001e0 7f 58 44 10 6c b4 ab 76 65 1d 57 61 de fb a2 18 .XD.l..ve.Wa.... 00:22:29.555 000001f0 19 0a c9 8e 54 e8 c8 5f 0a cb 34 60 8f e7 6f 0d ....T.._..4`..o. 
00:22:29.555 00000200 79 1d b3 55 06 d4 a6 4c 3c d6 b3 ca 27 62 17 b8 y..U...L<...'b.. 00:22:29.555 00000210 68 6d a2 ff e5 9b c3 a9 32 0f c1 20 52 d2 a5 d4 hm......2.. R... 00:22:29.555 00000220 17 68 8b f9 b5 6a ce d5 83 14 18 58 33 22 47 11 .h...j.....X3"G. 00:22:29.555 00000230 67 73 0b cc fd 83 56 cd df 3a 77 17 70 5a ca eb gs....V..:w.pZ.. 00:22:29.555 00000240 2a ac 3c 8e f8 10 ea 87 3a b4 69 a1 6d 5a c6 f0 *.<.....:.i.mZ.. 00:22:29.555 00000250 d6 c0 30 bb e6 83 99 fb 97 97 41 62 84 92 6f e7 ..0.......Ab..o. 00:22:29.555 00000260 a9 5e dc cb 5e 3d d2 d3 de d7 56 49 18 07 b7 9e .^..^=....VI.... 00:22:29.555 00000270 38 52 98 b5 77 04 7d b7 e1 5a 2b 42 72 9a a6 f4 8R..w.}..Z+Br... 00:22:29.555 00000280 a7 7f a5 c1 e4 f6 fe 60 64 da b7 2b 2f 22 f6 7f .......`d..+/".. 00:22:29.555 00000290 4b d4 a9 c5 e7 34 3e 87 04 ee 2f dd e6 94 c0 c5 K....4>.../..... 00:22:29.555 000002a0 ed 61 7f b0 f1 b0 37 31 08 14 22 08 4a 0a 3a b2 .a....71..".J.:. 00:22:29.555 000002b0 7b 89 98 3b 42 da 0e 88 ff 0b 13 7b c3 25 b6 cb {..;B......{.%.. 00:22:29.555 000002c0 60 ac 6a 9e ca 1a 32 cd cd 4c e3 33 a0 3c 96 8f `.j...2..L.3.<.. 00:22:29.555 000002d0 92 3d ff 74 91 2f f3 e8 90 19 5f ed de 7c e4 a0 .=.t./...._..|.. 00:22:29.555 000002e0 56 da 09 24 3b 6d 55 ce 45 a7 1a 30 54 9a 73 cb V..$;mU.E..0T.s. 00:22:29.555 000002f0 3a 70 7a ce ec 97 25 f8 3a 9f 95 94 9a 2a 95 09 :pz...%.:....*.. 00:22:29.555 dh secret: 00:22:29.555 00000000 3b da 72 6a 11 26 76 1c e9 58 e5 2b 6c 4c b4 1d ;.rj.&v..X.+lL.. 00:22:29.555 00000010 55 15 55 3f 61 dc e6 a6 ad a7 cb 13 80 03 13 78 U.U?a..........x 00:22:29.555 00000020 e2 82 ae aa 53 02 92 d4 cf 75 5b bd f5 cd be 99 ....S....u[..... 00:22:29.555 00000030 44 e7 eb 22 5d 3b 26 29 4b 6c 22 0a fb 94 e6 76 D.."];&)Kl"....v 00:22:29.555 00000040 58 d1 16 04 09 c4 20 c1 66 bc d9 fb 16 6f b8 b3 X..... .f....o.. 00:22:29.555 00000050 6e c0 23 9f 1e 84 c5 f3 74 e1 d6 c3 bc 6b 65 ed n.#.....t....ke. 00:22:29.555 00000060 d3 3c 3c 19 5f 58 6e f4 f1 42 29 08 2d b4 1c cb .<<._Xn..B).-... 00:22:29.555 00000070 d0 1e 09 3e 85 27 c2 81 12 78 1d 74 f8 b9 92 6f ...>.'...x.t...o 00:22:29.555 00000080 63 ab eb 70 77 ee da 2d 3a 35 2e bf 1c 49 c3 1d c..pw..-:5...I.. 00:22:29.555 00000090 61 09 be fc b7 53 d9 16 12 a6 b1 16 97 8d bb 74 a....S.........t 00:22:29.555 000000a0 e6 88 a5 77 ac 40 98 08 a5 4d 2f cc d3 2d 58 90 ...w.@...M/..-X. 00:22:29.555 000000b0 db d7 cf b2 7d 62 7d ca 23 b2 91 cc 56 61 dd 51 ....}b}.#...Va.Q 00:22:29.555 000000c0 76 97 ee 2b 2c 77 32 95 52 80 f1 96 0e ee 22 2d v..+,w2.R....."- 00:22:29.555 000000d0 33 36 c1 65 18 72 13 0f f1 6f d0 0f b9 f3 e4 e6 36.e.r...o...... 00:22:29.555 000000e0 98 8f 5c 69 f6 9a e9 49 32 99 6d f1 34 01 5a 2e ..\i...I2.m.4.Z. 00:22:29.555 000000f0 26 ad 32 50 25 36 d3 27 f4 7a 7d 92 a8 5a 74 73 &.2P%6.'.z}..Zts 00:22:29.556 00000100 8e 6c 09 e0 bb 4b d9 c2 ab 98 93 89 b1 bc 2a 74 .l...K........*t 00:22:29.556 00000110 b9 d2 9a 96 c8 95 82 6c 84 1c fa 18 40 44 28 fd .......l....@D(. 00:22:29.556 00000120 f5 9e cd bf a5 94 62 2f 79 8f 74 ee d1 ed 92 66 ......b/y.t....f 00:22:29.556 00000130 c4 6e 5e 72 5b bb 16 75 83 b9 b2 ee 3c 54 1f 04 .n^r[..u........V....k...? 00:22:29.556 00000220 c9 6b d4 e7 b1 e6 1d 1b 2c 45 7b e4 1a 1e 1f f9 .k......,E{..... 
00:22:29.556 00000230 cf 64 f7 e2 16 3f 7d bc fe 0d ed 9c d8 9c aa 68 .d...?}........h 00:22:29.556 00000240 00 e5 a1 49 64 40 77 ab 83 10 1c 12 77 68 58 47 ...Id@w.....whXG 00:22:29.556 00000250 a3 f7 87 5c 0d fc f7 b6 4d 26 02 2f 96 5f 62 33 ...\....M&./._b3 00:22:29.556 00000260 59 e2 39 96 10 8d a1 86 a4 87 0f 45 18 90 34 75 Y.9........E..4u 00:22:29.556 00000270 5f 0c ce 40 a7 c6 56 dd b5 7b eb cb 54 3c 19 a0 _..@..V..{..T<.. 00:22:29.556 00000280 5d 4f 06 5c 06 a6 f6 a9 3c 68 69 fe 3d 1d d8 8d ]O.\......n 00:22:29.556 00000060 80 80 71 25 0e 11 8f 44 0e f5 2c 1b e4 d0 c5 75 ..q%...D..,....u 00:22:29.556 00000070 62 7f 77 b8 69 4b 4c af ff c2 41 2a 88 df c8 40 b.w.iKL...A*...@ 00:22:29.556 00000080 30 72 d6 f1 4d 9a fa f8 85 d2 4b 6c 53 ff a5 b0 0r..M.....KlS... 00:22:29.556 00000090 22 cd 47 f8 90 b2 56 5c de 6b 14 bf b9 71 2b a8 ".G...V\.k...q+. 00:22:29.556 000000a0 0e f8 19 6c 66 cb ff e6 4e 4b 4b 28 d5 7e e1 8b ...lf...NKK(.~.. 00:22:29.556 000000b0 38 a7 43 09 cc 9c e8 ca 99 e7 c0 f1 7c d4 81 91 8.C.........|... 00:22:29.556 000000c0 fd fb 83 31 4b c0 81 b2 25 09 08 dd da 12 e2 3f ...1K...%......? 00:22:29.556 000000d0 3b 4c 6d c8 e5 63 4a 5c d1 68 c5 3e 46 b5 de b1 ;Lm..cJ\.h.>F... 00:22:29.556 000000e0 38 bc 82 54 e0 69 81 87 a3 06 b6 8e c2 c5 52 96 8..T.i........R. 00:22:29.556 000000f0 08 b1 90 21 28 ad 92 8a bf 1f 33 50 dc ad b4 8e ...!(.....3P.... 00:22:29.556 00000100 97 f8 ac 1d fc e9 75 c7 23 1b 19 08 11 d9 02 ef ......u.#....... 00:22:29.556 00000110 65 c4 05 4f 43 d2 a1 d6 b2 3b aa 0a 46 94 bb 82 e..OC....;..F... 00:22:29.556 00000120 29 28 f2 bf e5 e0 79 56 08 cb 1e 25 b7 36 fb f5 )(....yV...%.6.. 00:22:29.556 00000130 8e a4 bd f8 12 0e 34 ff cd 3a 55 b0 7a e0 8f 1b ......4..:U.z... 00:22:29.556 00000140 cd 3e b9 c2 56 b6 28 94 cb 6f a2 d9 70 cd 0c 5a .>..V.(..o..p..Z 00:22:29.556 00000150 c4 e3 2d fd 2a 8d ed 15 89 08 06 01 70 dc be 47 ..-.*.......p..G 00:22:29.556 00000160 6e 74 b0 a3 19 ea 84 eb bf 17 9b c8 ca c7 67 9b nt............g. 00:22:29.556 00000170 37 63 33 19 c2 62 2a 9d 45 b6 0a 56 f4 29 fc bc 7c3..b*.E..V.).. 00:22:29.556 00000180 3a 41 c0 92 5a d1 be 97 10 1d be 38 49 31 40 ce :A..Z......8I1@. 00:22:29.556 00000190 92 fd 53 ca e3 9f b3 27 8f f7 bc d2 9a 34 ab 71 ..S....'.....4.q 00:22:29.556 000001a0 2e 06 24 c9 b5 69 2d dd a1 c1 06 d3 0a 6b 8a c6 ..$..i-......k.. 00:22:29.556 000001b0 f3 7f 5a a3 02 90 9a c1 37 5d 05 11 4b 6a 23 24 ..Z.....7]..Kj#$ 00:22:29.556 000001c0 92 d2 bc cf d4 f3 96 14 50 6c e5 2a 23 51 03 da ........Pl.*#Q.. 00:22:29.556 000001d0 82 2a 22 b6 fb 99 0a ff fc 1b c4 5a 03 d3 bc ac .*"........Z.... 00:22:29.556 000001e0 f2 bf 02 8f cc 3a 9b 56 41 15 50 ff de 45 e3 d3 .....:.VA.P..E.. 00:22:29.556 000001f0 71 08 c6 a4 b5 75 b1 32 ce 79 60 bd 1f 50 83 94 q....u.2.y`..P.. 00:22:29.556 00000200 92 29 24 40 2c c2 fc 87 c7 55 6a da 1c 59 7e a7 .)$@,....Uj..Y~. 00:22:29.556 00000210 8f 0c fc 8d 22 07 c4 ad 10 43 91 47 99 d4 4c 57 ...."....C.G..LW 00:22:29.556 00000220 0c b2 1e 72 2b e1 b1 d3 7f 9d a7 9e 9e 6a ae a4 ...r+........j.. 00:22:29.556 00000230 69 2b 37 1a 44 82 c9 53 e1 96 73 b0 7e 44 54 18 i+7.D..S..s.~DT. 00:22:29.556 00000240 8b b6 5e 67 4d 60 24 f5 d9 55 81 87 e2 3e 60 22 ..^gM`$..U...>`" 00:22:29.556 00000250 33 01 c6 70 22 3b de 0c d5 a8 db 9f cf 51 18 37 3..p";.......Q.7 00:22:29.556 00000260 01 b7 f9 59 92 47 37 e6 e4 13 0d 27 be 07 26 84 ...Y.G7....'..&. 00:22:29.556 00000270 97 85 f2 6d a4 90 88 72 21 4e 02 10 51 37 20 af ...m...r!N..Q7 . 
00:22:29.556 00000280 bf 65 4b 5c ea b1 ff 13 36 b8 34 8d bd 29 6a df .eK\....6.4..)j. 00:22:29.556 00000290 51 6c f9 13 bf 87 49 cc 95 58 92 b8 0a 90 b0 bf Ql....I..X...... 00:22:29.556 000002a0 49 95 5a 71 93 a4 93 b8 5b 3d 77 7e 36 a7 d1 0a I.Zq....[=w~6... 00:22:29.556 000002b0 a2 54 27 73 66 0a 8a 23 ff 82 f7 3f c1 b7 61 e2 .T'sf..#...?..a. 00:22:29.556 000002c0 f2 d1 3b ba a1 db ca 98 12 66 a2 70 06 da 9f e2 ..;......f.p.... 00:22:29.556 000002d0 56 8f 50 ab 4b 27 6c aa 1a 1d bc a1 c3 57 59 66 V.P.K'l......WYf 00:22:29.556 000002e0 78 95 47 3f 51 c4 ff d5 a0 a0 4e 40 76 dc c6 82 x.G?Q.....N@v... 00:22:29.556 000002f0 a2 31 56 bd 48 7c 4b a2 c4 87 1b 99 bc 99 db 59 .1V.H|K........Y 00:22:29.556 host pubkey: 00:22:29.556 00000000 b1 22 62 86 e1 2f 07 a4 14 3f 80 9e 3a 19 08 d4 ."b../...?..:... 00:22:29.556 00000010 6f 6d 81 41 12 68 e5 c7 70 e4 0f 39 51 2e 76 ba om.A.h..p..9Q.v. 00:22:29.556 00000020 e1 95 6a c2 07 13 22 81 74 61 55 ef 4f 2f ca df ..j...".taU.O/.. 00:22:29.556 00000030 4a c2 89 dc 21 08 d1 df 87 de fe 25 89 eb de 74 J...!......%...t 00:22:29.556 00000040 a8 35 4f 16 2e 79 d6 d3 57 f1 0a ce d6 a7 6c 5e .5O..y..W.....l^ 00:22:29.556 00000050 f8 7a 80 db 1d be 6b f9 2b 5f c8 0b ab 4a 95 a3 .z....k.+_...J.. 00:22:29.556 00000060 5b b7 02 11 95 11 5b cc d3 11 cb 14 f5 24 be 24 [.....[......$.$ 00:22:29.556 00000070 8b 37 67 b9 8e 0d d8 55 6e b5 a5 2f 6a 13 91 87 .7g....Un../j... 00:22:29.556 00000080 92 e0 22 9b d5 14 95 1e 5d e2 84 d4 9e f8 a7 6b ..".....]......k 00:22:29.556 00000090 e3 0a b2 c5 34 9c 0b aa 8e 5d 20 f9 03 ec 60 c8 ....4....] ...`. 00:22:29.556 000000a0 0b e6 22 8a d6 5e 1c 7a b5 9f 09 77 0c ff 16 76 .."..^.z...w...v 00:22:29.556 000000b0 18 21 ef 68 ee 8a 6a 14 e8 c0 5b 87 17 38 9e 00 .!.h..j...[..8.. 00:22:29.556 000000c0 eb 6a 52 14 f1 4b aa a1 f2 85 a7 ab 6c f7 f9 fe .jR..K......l... 00:22:29.556 000000d0 80 cd b2 bc 1d e7 4f c2 0a 0c d3 07 a1 f6 ec 45 ......O........E 00:22:29.556 000000e0 c3 7b 6d bc 11 36 ad d2 25 5d 91 2c ee 8c be 86 .{m..6..%].,.... 00:22:29.556 000000f0 88 b0 cc 50 e6 db 26 2e f7 00 09 91 e5 19 d3 2f ...P..&......../ 00:22:29.556 00000100 3f 72 80 83 f6 70 c1 c6 e1 f4 d1 d6 68 91 ba 98 ?r...p......h... 00:22:29.556 00000110 07 dd 19 1c 71 95 19 18 56 53 db 26 fe 42 13 43 ....q...VS.&.B.C 00:22:29.556 00000120 94 a7 aa 1e 49 db 6e a9 24 5e 50 27 26 6c 28 0e ....I.n.$^P'&l(. 00:22:29.556 00000130 1d 7b 6c 34 00 01 b7 3b ba 86 56 06 57 d2 b4 28 .{l4...;..V.W..( 00:22:29.556 00000140 4f e2 95 32 d5 fe d6 68 c8 9b db 8e 15 80 23 f6 O..2...h......#. 00:22:29.556 00000150 33 96 76 e1 e6 f7 1f ef be 37 5a 19 d7 bd f7 87 3.v......7Z..... 00:22:29.556 00000160 fe 07 01 71 d0 61 6d 2c c7 44 5d bf bb 6f ea fa ...q.am,.D]..o.. 00:22:29.556 00000170 23 1f c1 3c 5f 28 07 09 1f 4f 24 a1 dd 64 e7 79 #..<_(...O$..d.y 00:22:29.556 00000180 e0 a7 3a 65 07 d0 f1 ca 1b f8 ef e0 3d eb c8 41 ..:e........=..A 00:22:29.556 00000190 34 fe 18 0a 89 e8 b3 08 00 e9 fd 1d 1a df 5e 52 4.............^R 00:22:29.556 000001a0 c6 29 f8 17 87 22 c6 96 a0 67 ad 25 1b e8 d1 43 .)..."...g.%...C 00:22:29.556 000001b0 ea 53 d5 e7 2b 06 08 23 56 cc 6e ec 02 ff b8 6c .S..+..#V.n....l 00:22:29.556 000001c0 60 5e 25 7c 6d bc 1c ab 9f 30 90 46 d2 20 e8 02 `^%|m....0.F. .. 00:22:29.556 000001d0 04 91 77 3e ea 52 37 27 da 38 f1 93 f3 d8 9d 26 ..w>.R7'.8.....& 00:22:29.556 000001e0 64 a8 86 5b a3 e1 71 84 3b 1f ae 4e 1e e8 18 19 d..[..q.;..N.... 00:22:29.556 000001f0 75 f1 db 9f b4 cb 93 fb 87 ef b5 11 52 2f 29 00 u...........R/). 
00:22:29.556 00000200 ca 50 eb d5 5f a1 7a e9 d1 15 ba 2a 95 95 12 7f .P.._.z....*.... 00:22:29.556 00000210 0f 64 17 0f 40 e7 39 7f e0 a9 b5 b1 12 55 23 fc .d..@.9......U#. 00:22:29.556 00000220 1c d9 9e 09 fd 3b a1 69 fd 0b 4d 3f 0f 80 e2 ea .....;.i..M?.... 00:22:29.556 00000230 3e fe 3c 21 0d 81 ce c6 4a a2 7e df 4a 0d 5e 6c >...t.3.. 00:22:29.556 000002b0 6f d8 60 60 c9 a5 02 70 59 69 44 40 38 84 1a c3 o.``...pYiD@8... 00:22:29.556 000002c0 fa cf e2 1c 68 1c c8 08 6f 5a 38 f8 16 54 83 52 ....h...oZ8..T.R 00:22:29.556 000002d0 13 fe 56 32 c6 e2 30 f3 5f 15 d7 3d e8 df 86 12 ..V2..0._..=.... 00:22:29.556 000002e0 18 fa 59 c2 fe 52 46 56 24 5e 42 e1 f9 7e 29 33 ..Y..RFV$^B..~)3 00:22:29.556 000002f0 ff 89 3d 5f 76 05 b2 e1 e7 e2 29 5e 65 f1 90 95 ..=_v.....)^e... 00:22:29.556 dh secret: 00:22:29.556 00000000 ff bc e0 11 4c 6b 6f 81 b8 e5 cf 98 4a 3f e5 80 ....Lko.....J?.. 00:22:29.556 00000010 59 e4 6d cd 18 35 5e c1 9f 1b fd 64 13 d5 f6 8b Y.m..5^....d.... 00:22:29.556 00000020 a8 ab 3d 59 e1 9d e6 0b 70 f2 b6 7e 70 4e 22 84 ..=Y....p..~pN". 00:22:29.556 00000030 b9 56 7f 72 31 7f d7 cc 70 b5 7b 5a ff b7 3a 5a .V.r1...p.{Z..:Z 00:22:29.556 00000040 ef 56 12 07 e5 bc 95 72 19 0d ca b7 70 e5 ae 9d .V.....r....p... 00:22:29.556 00000050 9b 1b 00 3f 35 3a 15 ea 45 09 86 69 dd 33 14 69 ...?5:..E..i.3.i 00:22:29.556 00000060 8b 57 dd 22 61 82 15 e4 2d 2f 97 2e ea 7f 24 98 .W."a...-/....$. 00:22:29.556 00000070 1e a1 fa 37 ad dc 58 82 77 8c ee 9c db e8 bd 54 ...7..X.w......T 00:22:29.556 00000080 a4 cb 08 b8 f7 35 48 4e 1f 71 b3 cb 2e a1 a6 83 .....5HN.q...... 00:22:29.556 00000090 eb 43 8d b4 8a b1 b2 bb 67 e5 f0 e4 75 05 3a af .C......g...u.:. 00:22:29.556 000000a0 02 1a 6a 1f 66 bd 77 d9 be 9f 94 79 16 55 e6 33 ..j.f.w....y.U.3 00:22:29.556 000000b0 a4 8f 0e 3b 98 87 51 dc d4 99 93 56 ff e1 be f6 ...;..Q....V.... 00:22:29.556 000000c0 ed 6b f8 a7 42 73 54 a0 fc c1 10 5d 92 88 ee 03 .k..BsT....].... 00:22:29.556 000000d0 dc 26 4e c3 33 05 97 d7 b1 01 c5 69 ec a7 df be .&N.3......i.... 00:22:29.556 000000e0 c0 18 66 e8 21 19 a2 d8 6e 9e c5 32 ee 9d d8 2a ..f.!...n..2...* 00:22:29.556 000000f0 2f 84 f3 0b 15 af 27 6d 28 e7 ec 29 29 80 7a 20 /.....'m(..)).z 00:22:29.556 00000100 54 15 f5 7a 6a 71 24 67 58 93 98 e4 66 b5 45 b6 T..zjq$gX...f.E. 00:22:29.556 00000110 e2 cb 48 38 ac 1c 59 7b 2c c2 88 94 ba 33 e6 d6 ..H8..Y{,....3.. 00:22:29.556 00000120 7c 5d 23 71 70 96 3c 98 6b 35 c0 a8 59 21 2a 33 |]#qp.<.k5..Y!*3 00:22:29.556 00000130 23 d1 ac 1d b9 7e b7 94 42 b6 83 f6 65 d2 c2 54 #....~..B...e..T 00:22:29.556 00000140 0a f5 db e0 45 ef 26 a9 38 f3 97 51 8c 1a 5c 1b ....E.&.8..Q..\. 00:22:29.556 00000150 06 fa d8 e2 f3 cd ff 58 c5 40 ce e8 51 05 49 2f .......X.@..Q.I/ 00:22:29.556 00000160 b9 33 8f 2c 8e 43 4e aa 31 b5 0b ec 27 92 b1 cf .3.,.CN.1...'... 00:22:29.557 00000170 2b 7e 69 80 29 95 2d 6a 09 b6 65 68 b7 a5 7a 1c +~i.).-j..eh..z. 00:22:29.557 00000180 71 7a 42 d7 43 4e c6 5c 5d d3 96 db 3d 1c 8d 54 qzB.CN.\]...=..T 00:22:29.557 00000190 9a 32 1e 1f 59 27 12 40 01 3a 6b 00 81 79 af ad .2..Y'.@.:k..y.. 00:22:29.557 000001a0 5e 3d ba c8 cc 61 5d b4 09 90 75 dd 3f 12 a1 8d ^=...a]...u.?... 00:22:29.557 000001b0 52 09 f0 c3 05 aa fe 57 67 78 31 f7 c0 c6 fe b3 R......Wgx1..... 00:22:29.557 000001c0 8e bf d3 e6 b5 3c 32 10 a4 b0 91 cc fb 03 e7 e9 .....<2......... 00:22:29.557 000001d0 7b d8 b5 52 60 d7 1e 97 46 ec 9f 7e 7f 01 06 9a {..R`...F..~.... 
00:22:29.557 000001e0 d5 f9 c8 56 94 22 a9 ef e6 31 14 2a 58 41 3a 61 ...V."...1.*XA:a 00:22:29.557 000001f0 c3 c4 e4 b7 9c 98 28 ff 8b df 36 ac ea c7 ce 05 ......(...6..... 00:22:29.557 00000200 5f aa b0 44 5f 33 45 ff 25 56 49 95 eb 7b fb 35 _..D_3E.%VI..{.5 00:22:29.557 00000210 4d 66 c4 fc ba da f2 0b a7 0b 51 c6 b7 be 0e c9 Mf........Q..... 00:22:29.557 00000220 bf 59 58 9b 02 e0 cc f1 79 4c 9b b6 29 74 0f 5d .YX.....yL..)t.] 00:22:29.557 00000230 2e fe 48 68 f3 48 e6 cb 9c 32 0f be 0a 7c 9a a0 ..Hh.H...2...|.. 00:22:29.557 00000240 bf 29 de 37 dd 5f fe 73 18 32 f6 bf f1 a3 f0 12 .).7._.s.2...... 00:22:29.557 00000250 31 2f c6 d6 35 7e 92 78 d9 46 8b 84 52 76 f7 d0 1/..5~.x.F..Rv.. 00:22:29.557 00000260 a2 4b f3 2e ff 4e d2 d0 15 d2 5e d3 6b 86 5b 5c .K...N....^.k.[\ 00:22:29.557 00000270 bd 8e 19 ff e8 73 ca 22 09 59 e7 66 d0 87 4a 74 .....s.".Y.f..Jt 00:22:29.557 00000280 29 67 d6 fc f4 4e 8f eb e3 3c 5c f2 b1 f7 6d 58 )g...N...<\...mX 00:22:29.557 00000290 b3 78 58 7e 31 1e 77 47 21 bc bd 04 df b0 d5 ae .xX~1.wG!....... 00:22:29.557 000002a0 ce fe 65 7f 45 55 1f f6 71 3f df c5 2e 83 9c 42 ..e.EU..q?.....B 00:22:29.557 000002b0 1b a8 76 79 67 de 35 56 c3 64 2d f1 04 27 69 5f ..vyg.5V.d-..'i_ 00:22:29.557 000002c0 d0 05 70 f0 12 95 91 4b 5e d8 37 b9 33 47 1e 4c ..p....K^.7.3G.L 00:22:29.557 000002d0 2d a9 b0 1b af 7b ce f8 b0 79 b1 d6 2f 18 5a 1f -....{...y../.Z. 00:22:29.557 000002e0 d9 1e da 29 a4 64 11 60 d7 37 59 8f 0c 4b f2 82 ...).d.`.7Y..K.. 00:22:29.557 000002f0 d2 c2 7b ba 91 9b d2 6d 40 b4 c5 ca 04 db df 3c ..{....m@......< 00:22:29.557 [2024-09-27 13:27:26.409884] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key3, hash=3, dhgroup=4, seq=3775755310, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.557 [2024-09-27 13:27:26.410255] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.557 [2024-09-27 13:27:26.475375] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.557 [2024-09-27 13:27:26.476073] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.557 [2024-09-27 13:27:26.476425] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.557 [2024-09-27 13:27:26.476836] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.557 [2024-09-27 13:27:26.611595] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.557 [2024-09-27 13:27:26.611837] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.557 [2024-09-27 13:27:26.611956] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 4 (ffdhe6144) 00:22:29.557 [2024-09-27 13:27:26.612051] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.557 [2024-09-27 13:27:26.612272] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] 
auth state: await-challenge 00:22:29.557 ctrlr pubkey: 00:22:29.557 00000000 bf e7 cd 45 e9 b9 f2 5a 35 3f 4e b8 06 ba 75 e0 ...E...Z5?N...u. 00:22:29.557 00000010 a7 20 55 f3 eb d9 16 4f 83 06 da 74 b1 db 00 73 . U....O...t...s 00:22:29.557 00000020 c5 17 4b 1c db af 8c a1 07 47 3a 1e ee c0 62 05 ..K......G:...b. 00:22:29.557 00000030 29 a5 12 8a 5a a4 d4 4a f7 35 c0 12 a9 b5 4e b6 )...Z..J.5....N. 00:22:29.557 00000040 46 fa 02 d7 38 6f 5e 73 f3 33 d7 a1 3f 76 8d 36 F...8o^s.3..?v.6 00:22:29.557 00000050 c6 f1 f2 da 83 93 27 20 f9 3f 61 08 ac 16 0e f3 ......' .?a..... 00:22:29.557 00000060 d8 85 6c 82 48 a2 f9 68 84 db ec c2 78 be 56 d8 ..l.H..h....x.V. 00:22:29.557 00000070 30 20 ca 13 b7 7c 8b d8 34 f7 86 f7 cf 7e 1a 6a 0 ...|..4....~.j 00:22:29.557 00000080 f2 97 e2 7e 20 06 19 f9 d1 ef d1 04 b4 97 f5 4b ...~ ..........K 00:22:29.557 00000090 1c d0 18 86 c5 4f b1 34 bb 13 98 8a 8f 11 5a c5 .....O.4......Z. 00:22:29.557 000000a0 48 10 ed 9d 23 df 75 36 9b b4 7e 11 8a 1f d0 4f H...#.u6..~....O 00:22:29.557 000000b0 7a c0 f3 60 ed c0 46 a6 3b be 95 f9 2e e2 bf 3b z..`..F.;......; 00:22:29.557 000000c0 d0 ef ea 39 ab 0e 9c 6c d8 f0 f5 9d 32 ac 1c 37 ...9...l....2..7 00:22:29.557 000000d0 fa ae 80 1c 12 ad 69 45 55 f0 75 d5 8a e3 27 94 ......iEU.u...'. 00:22:29.557 000000e0 ac 5d 55 3a ec cd d3 a3 d7 6d 53 11 53 86 ba 96 .]U:.....mS.S... 00:22:29.557 000000f0 1b cd d3 3d 10 4b dc b2 70 0f ac fa 1a 4a c5 e8 ...=.K..p....J.. 00:22:29.557 00000100 36 64 c9 86 cf f1 ee b9 28 1a 91 c0 b7 14 7c fb 6d......(.....|. 00:22:29.557 00000110 82 c4 86 80 74 02 ff 70 12 5a 84 f8 26 72 e4 fd ....t..p.Z..&r.. 00:22:29.557 00000120 4d dc 17 82 ae 36 6a f3 36 69 dc ff 07 9c 7a 97 M....6j.6i....z. 00:22:29.557 00000130 c2 88 10 2d 59 ab 8d 8d 89 a5 85 56 a3 b4 8e 16 ...-Y......V.... 00:22:29.557 00000140 36 91 5a 70 78 f2 81 bf 85 3b d0 27 ec 1d 8d 9b 6.Zpx....;.'.... 00:22:29.557 00000150 d9 3b 38 9e aa e0 b3 78 80 6c 57 56 ee 39 12 ef .;8....x.lWV.9.. 00:22:29.557 00000160 ea ae 40 07 22 81 75 e4 74 21 2b ef 79 27 66 a4 ..@.".u.t!+.y'f. 00:22:29.557 00000170 07 30 ab e7 ce b7 be 77 01 a0 1a bd 35 13 83 79 .0.....w....5..y 00:22:29.557 00000180 0b 83 01 2b da 22 2b 72 bb 5e 34 fa 1e 90 ae f4 ...+."+r.^4..... 00:22:29.557 00000190 84 dc 19 6f ea 8b 80 ec 01 a8 87 22 fd 68 76 0d ...o.......".hv. 00:22:29.557 000001a0 aa 70 f8 a9 2c 04 99 81 76 b9 2f 68 1c ef 06 a0 .p..,...v./h.... 00:22:29.557 000001b0 42 ba b0 08 f0 a9 c0 36 b5 b4 3b b6 0a 90 23 3d B......6..;...#= 00:22:29.557 000001c0 5f fa 98 b9 34 c0 92 bb 29 d3 b0 0c 9d f0 dc d0 _...4...)....... 00:22:29.557 000001d0 aa 01 b7 17 bc 8f 48 db 27 b8 bd bb 3f 99 82 fd ......H.'...?... 00:22:29.557 000001e0 9d fd 37 f8 38 7e 89 d0 c9 b0 42 ca f3 ed 42 8c ..7.8~....B...B. 00:22:29.557 000001f0 11 db 24 6f d2 9e f9 01 56 22 dd 0e a1 c9 34 a9 ..$o....V"....4. 00:22:29.557 00000200 b9 b4 56 7f e5 88 97 c5 56 79 c1 94 96 1a 47 11 ..V.....Vy....G. 00:22:29.557 00000210 87 47 1b 1f 3b 1a 05 fd ea 17 49 d1 39 d7 3d 6a .G..;.....I.9.=j 00:22:29.557 00000220 1c 30 b8 b1 9d 1a 3b 8f 6c fa d5 24 08 82 d4 1f .0....;.l..$.... 00:22:29.557 00000230 ad 8d 43 16 a5 8e 35 68 92 57 92 24 fa 88 59 0a ..C...5h.W.$..Y. 00:22:29.557 00000240 6c 1c fe fa 3d 86 1c 57 b0 00 66 b7 3c 73 cb 00 l...=..W..f..S...b...#..]. 00:22:29.557 000001f0 6a 7e d9 cd 59 08 e7 76 ec a6 de a4 c5 a1 48 5b j~..Y..v......H[ 00:22:29.557 00000200 ab e2 73 51 53 9c 66 cc ad 59 3f 4e 15 67 4d e3 ..sQS.f..Y?N.gM. 
00:22:29.557 00000210 94 92 db bc 08 42 65 99 d2 16 17 68 e8 f3 ac 32 .....Be....h...2 00:22:29.557 00000220 5f 83 1a 2f a0 1e bf 6e 50 ad ef 13 d8 e8 25 19 _../...nP.....%. 00:22:29.557 00000230 87 b9 41 64 af ff d0 1f df 2b 0a 6c ad 4c ac 2d ..Ad.....+.l.L.- 00:22:29.557 00000240 11 29 24 4c 8b 98 98 89 d0 c1 56 56 ac 67 3b 83 .)$L......VV.g;. 00:22:29.557 00000250 50 e3 e2 1c 53 be 4a 44 e4 6d 9d 3e e0 01 45 b9 P...S.JD.m.>..E. 00:22:29.557 00000260 56 9b 49 fb be 59 8d a5 8f 67 5a 6d 6b a4 00 93 V.I..Y...gZmk... 00:22:29.557 00000270 26 a0 2d 38 6e 2c 71 de cf 42 89 9b f7 32 32 a2 &.-8n,q..B...22. 00:22:29.557 00000280 1d 61 57 6f f6 c0 56 55 a0 a2 88 a0 e1 6b aa 70 .aWo..VU.....k.p 00:22:29.557 00000290 73 c1 96 20 10 98 04 ef 94 dd 58 fe 56 5d 14 d2 s.. ......X.V].. 00:22:29.557 000002a0 7b 82 44 80 f0 20 34 ce 5c c4 c2 23 ec 64 55 58 {.D.. 4.\..#.dUX 00:22:29.557 000002b0 1e ca 5c f8 70 1f 38 39 0d e6 e7 d7 a3 a1 db f6 ..\.p.89........ 00:22:29.557 000002c0 fd 0b 7a 8c c0 ac 3f a1 cc a1 9a 4a 61 0b 37 1d ..z...?....Ja.7. 00:22:29.557 000002d0 8b b2 4f c4 85 b7 3c 3f 5a ac 75 93 88 8b bc e9 ..O....=. 00:22:29.558 000001f0 ab 95 b0 df aa 36 a2 46 07 89 22 2d 85 ff 00 fe .....6.F.."-.... 00:22:29.558 00000200 2a 7a a2 70 e5 12 f4 48 b5 80 3f ec 04 77 1b 90 *z.p...H..?..w.. 00:22:29.558 00000210 c0 2d 27 f2 0d 4a 21 4e ce 08 c2 d1 ff 32 35 64 .-'..J!N.....25d 00:22:29.558 00000220 26 d3 09 10 ce b3 e9 82 1a 2f e4 7e db e3 b2 4a &......../.~...J 00:22:29.558 00000230 c0 bb 94 63 47 47 27 c4 79 5d a1 13 b0 fd 49 68 ...cGG'.y]....Ih 00:22:29.558 00000240 6b 9a bb 46 a7 84 96 69 dc 74 63 94 16 98 c6 9d k..F...i.tc..... 00:22:29.558 00000250 2d b3 21 2b f8 24 52 59 28 35 72 73 4e 25 b1 82 -.!+.$RY(5rsN%.. 00:22:29.558 00000260 df 7c 50 77 1e f2 5d a6 c3 85 6c 32 fa 34 76 24 .|Pw..]...l2.4v$ 00:22:29.558 00000270 15 4d 31 16 78 13 21 9e a6 7a fc 9b ef 38 dd d3 .M1.x.!..z...8.. 00:22:29.558 00000280 e9 e5 d9 2b bc 00 cf 06 c3 9c 9c 1c c4 8c e2 79 ...+...........y 00:22:29.558 00000290 42 a9 65 d6 03 4a af 42 70 eb 16 b7 af 2c 25 6e B.e..J.Bp....,%n 00:22:29.558 000002a0 fe af 0a 67 62 55 3e 7c 40 34 b1 22 25 8b a7 ab ...gbU>|@4."%... 00:22:29.558 000002b0 a7 27 4d d2 51 28 b4 51 37 6e a1 57 23 26 2e f6 .'M.Q(.Q7n.W#&.. 00:22:29.558 000002c0 87 af 66 61 0d 5a 22 ca 14 a3 35 47 8c c5 32 83 ..fa.Z"...5G..2. 00:22:29.558 000002d0 1d 4a 30 91 77 c7 8c 08 22 c6 c8 84 95 22 4c ef .J0.w..."...."L. 00:22:29.558 000002e0 94 49 0e 8f 68 f5 aa 0f d0 3b cf e9 7d d1 d8 d5 .I..h....;..}... 00:22:29.558 000002f0 1e d4 c8 c8 ca d7 aa 4d 88 5f 92 17 35 83 57 96 .......M._..5.W. 
00:22:29.558 [2024-09-27 13:27:26.688588] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key4, hash=3, dhgroup=4, seq=3775755311, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.558 [2024-09-27 13:27:26.688862] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.558 [2024-09-27 13:27:26.742907] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.558 [2024-09-27 13:27:26.743249] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.558 [2024-09-27 13:27:26.743356] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.558 [2024-09-27 13:27:26.795567] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.558 [2024-09-27 13:27:26.795845] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.558 [2024-09-27 13:27:26.796010] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 4 (ffdhe6144) 00:22:29.558 [2024-09-27 13:27:26.796130] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.558 [2024-09-27 13:27:26.796353] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.558 ctrlr pubkey: 00:22:29.558 00000000 bf e7 cd 45 e9 b9 f2 5a 35 3f 4e b8 06 ba 75 e0 ...E...Z5?N...u. 00:22:29.558 00000010 a7 20 55 f3 eb d9 16 4f 83 06 da 74 b1 db 00 73 . U....O...t...s 00:22:29.558 00000020 c5 17 4b 1c db af 8c a1 07 47 3a 1e ee c0 62 05 ..K......G:...b. 00:22:29.558 00000030 29 a5 12 8a 5a a4 d4 4a f7 35 c0 12 a9 b5 4e b6 )...Z..J.5....N. 00:22:29.558 00000040 46 fa 02 d7 38 6f 5e 73 f3 33 d7 a1 3f 76 8d 36 F...8o^s.3..?v.6 00:22:29.558 00000050 c6 f1 f2 da 83 93 27 20 f9 3f 61 08 ac 16 0e f3 ......' .?a..... 00:22:29.558 00000060 d8 85 6c 82 48 a2 f9 68 84 db ec c2 78 be 56 d8 ..l.H..h....x.V. 00:22:29.558 00000070 30 20 ca 13 b7 7c 8b d8 34 f7 86 f7 cf 7e 1a 6a 0 ...|..4....~.j 00:22:29.558 00000080 f2 97 e2 7e 20 06 19 f9 d1 ef d1 04 b4 97 f5 4b ...~ ..........K 00:22:29.558 00000090 1c d0 18 86 c5 4f b1 34 bb 13 98 8a 8f 11 5a c5 .....O.4......Z. 00:22:29.558 000000a0 48 10 ed 9d 23 df 75 36 9b b4 7e 11 8a 1f d0 4f H...#.u6..~....O 00:22:29.558 000000b0 7a c0 f3 60 ed c0 46 a6 3b be 95 f9 2e e2 bf 3b z..`..F.;......; 00:22:29.558 000000c0 d0 ef ea 39 ab 0e 9c 6c d8 f0 f5 9d 32 ac 1c 37 ...9...l....2..7 00:22:29.558 000000d0 fa ae 80 1c 12 ad 69 45 55 f0 75 d5 8a e3 27 94 ......iEU.u...'. 00:22:29.558 000000e0 ac 5d 55 3a ec cd d3 a3 d7 6d 53 11 53 86 ba 96 .]U:.....mS.S... 00:22:29.558 000000f0 1b cd d3 3d 10 4b dc b2 70 0f ac fa 1a 4a c5 e8 ...=.K..p....J.. 00:22:29.558 00000100 36 64 c9 86 cf f1 ee b9 28 1a 91 c0 b7 14 7c fb 6d......(.....|. 00:22:29.558 00000110 82 c4 86 80 74 02 ff 70 12 5a 84 f8 26 72 e4 fd ....t..p.Z..&r.. 00:22:29.558 00000120 4d dc 17 82 ae 36 6a f3 36 69 dc ff 07 9c 7a 97 M....6j.6i....z. 
00:22:29.558 00000130 c2 88 10 2d 59 ab 8d 8d 89 a5 85 56 a3 b4 8e 16 ...-Y......V.... 00:22:29.558 00000140 36 91 5a 70 78 f2 81 bf 85 3b d0 27 ec 1d 8d 9b 6.Zpx....;.'.... 00:22:29.558 00000150 d9 3b 38 9e aa e0 b3 78 80 6c 57 56 ee 39 12 ef .;8....x.lWV.9.. 00:22:29.558 00000160 ea ae 40 07 22 81 75 e4 74 21 2b ef 79 27 66 a4 ..@.".u.t!+.y'f. 00:22:29.558 00000170 07 30 ab e7 ce b7 be 77 01 a0 1a bd 35 13 83 79 .0.....w....5..y 00:22:29.558 00000180 0b 83 01 2b da 22 2b 72 bb 5e 34 fa 1e 90 ae f4 ...+."+r.^4..... 00:22:29.558 00000190 84 dc 19 6f ea 8b 80 ec 01 a8 87 22 fd 68 76 0d ...o.......".hv. 00:22:29.558 000001a0 aa 70 f8 a9 2c 04 99 81 76 b9 2f 68 1c ef 06 a0 .p..,...v./h.... 00:22:29.558 000001b0 42 ba b0 08 f0 a9 c0 36 b5 b4 3b b6 0a 90 23 3d B......6..;...#= 00:22:29.558 000001c0 5f fa 98 b9 34 c0 92 bb 29 d3 b0 0c 9d f0 dc d0 _...4...)....... 00:22:29.558 000001d0 aa 01 b7 17 bc 8f 48 db 27 b8 bd bb 3f 99 82 fd ......H.'...?... 00:22:29.558 000001e0 9d fd 37 f8 38 7e 89 d0 c9 b0 42 ca f3 ed 42 8c ..7.8~....B...B. 00:22:29.558 000001f0 11 db 24 6f d2 9e f9 01 56 22 dd 0e a1 c9 34 a9 ..$o....V"....4. 00:22:29.558 00000200 b9 b4 56 7f e5 88 97 c5 56 79 c1 94 96 1a 47 11 ..V.....Vy....G. 00:22:29.558 00000210 87 47 1b 1f 3b 1a 05 fd ea 17 49 d1 39 d7 3d 6a .G..;.....I.9.=j 00:22:29.558 00000220 1c 30 b8 b1 9d 1a 3b 8f 6c fa d5 24 08 82 d4 1f .0....;.l..$.... 00:22:29.558 00000230 ad 8d 43 16 a5 8e 35 68 92 57 92 24 fa 88 59 0a ..C...5h.W.$..Y. 00:22:29.558 00000240 6c 1c fe fa 3d 86 1c 57 b0 00 66 b7 3c 73 cb 00 l...=..W..f. 00:22:29.558 000000e0 14 0f 52 03 8c d0 0b 48 19 e0 68 d5 e6 40 8c 43 ..R....H..h..@.C 00:22:29.558 000000f0 51 48 5c f4 da ea f7 23 fe 98 47 8d 5c bc 04 e8 QH\....#..G.\... 00:22:29.558 00000100 48 61 46 48 f3 5b c7 99 7e 4f 76 51 3f d6 52 44 HaFH.[..~OvQ?.RD 00:22:29.558 00000110 8e 3b 6a 6e 34 c7 29 9e 62 60 70 e5 11 91 88 6c .;jn4.).b`p....l 00:22:29.558 00000120 9a 0f b8 2c 2a 54 f4 34 9f 6d 17 b7 37 87 60 7e ...,*T.4.m..7.`~ 00:22:29.558 00000130 ad 12 82 cd 1e eb bb bf ca 76 76 21 8e a6 40 8c .........vv!..@. 00:22:29.558 00000140 e3 bf e0 7d 14 b9 ad 8b 01 31 9e 47 61 4a 41 ca ...}.....1.GaJA. 00:22:29.558 00000150 a7 2f e9 6a 14 70 dd 44 d2 69 2a 0d 28 35 fb ce ./.j.p.D.i*.(5.. 00:22:29.558 00000160 2f 07 b4 89 3f ba 23 5f c1 3f e3 fc 9e 84 74 bc /...?.#_.?....t. 00:22:29.558 00000170 5d b4 41 1c 84 a8 22 4b 1c 5e 0e 97 47 8a 35 89 ].A..."K.^..G.5. 00:22:29.558 00000180 c4 ad 00 4d ad 5a 62 84 90 d8 77 98 2e 13 a5 a9 ...M.Zb...w..... 00:22:29.558 00000190 fe 7d 59 fa c7 da eb c5 53 77 62 18 21 01 d4 a3 .}Y.....Swb.!... 00:22:29.558 000001a0 cc 01 c4 73 5b ef 53 4a 9e c2 ca da f2 20 30 6d ...s[.SJ..... 0m 00:22:29.558 000001b0 a5 ae 59 ff 11 25 d5 db 31 29 39 68 fa 2d 50 cf ..Y..%..1)9h.-P. 00:22:29.558 000001c0 96 a8 d5 11 de 7e 95 02 ad 7d 7c f3 1a 21 e0 a1 .....~...}|..!.. 00:22:29.558 000001d0 bd 48 b3 ed f8 d9 f5 c9 00 ed e6 8e 21 24 7a 0d .H..........!$z. 00:22:29.558 000001e0 18 71 bf b3 81 51 5b 58 8e 97 3c 89 3b 91 45 1b .q...Q[X..<.;.E. 00:22:29.558 000001f0 43 21 77 0b c8 3f 96 6e aa 3a 65 af e5 08 ae 6f C!w..?.n.:e....o 00:22:29.558 00000200 c5 3c 59 e9 91 f3 f1 6a e6 26 40 4a 51 ae 5e e5 . 00:22:29.559 00000170 39 97 61 93 f4 03 c9 5b 43 14 83 d8 e5 c7 55 58 9.a....[C.....UX 00:22:29.559 00000180 39 0e 54 38 74 d5 bf 96 14 25 d7 28 a3 ea 1d 01 9.T8t....%.(.... 00:22:29.559 00000190 0c c0 d4 55 68 32 ee 32 8c 43 90 44 71 b4 55 53 ...Uh2.2.C.Dq.US 00:22:29.559 000001a0 eb e3 f3 8e 62 07 55 79 4b b5 98 d8 9f 22 3d d5 ....b.UyK...."=. 
00:22:29.559 000001b0 42 e7 3a 3f 71 e1 a2 76 b7 0f 2c 03 7c 38 de 32 B.:?q..v..,.|8.2 00:22:29.559 000001c0 23 58 a8 8b c5 78 56 48 17 a4 6e 4e d5 b0 ec a3 #X...xVH..nN.... 00:22:29.559 000001d0 7c 8f 9e 5c c7 ed 20 99 9c 52 90 50 ce 80 c7 df |..\.. ..R.P.... 00:22:29.559 000001e0 50 08 c9 cf 2b 94 ad 20 fd f8 12 9c e2 2f de 61 P...+.. ...../.a 00:22:29.559 000001f0 61 e1 fb 9a cb 87 07 61 4a b5 c0 11 77 5a dd 28 a......aJ...wZ.( 00:22:29.559 00000200 87 b9 87 02 92 35 26 fa 63 9f 6a 01 eb fc 94 6d .....5&.c.j....m 00:22:29.559 00000210 f0 38 f9 07 f3 9d b4 96 5b 78 e3 40 55 01 15 d8 .8......[x.@U... 00:22:29.559 00000220 c8 b5 17 10 60 4f 33 07 05 b0 c0 3b 75 f6 ca 05 ....`O3....;u... 00:22:29.559 00000230 c1 77 19 5f 76 92 3e b8 29 bf b4 06 f3 4e 0e 2f .w._v.>.)....N./ 00:22:29.559 00000240 97 80 94 03 62 ba 1d c8 a9 6d 29 da c5 72 d0 16 ....b....m)..r.. 00:22:29.559 00000250 8b 81 9a 6a 8d 32 64 a7 36 66 39 c5 c2 89 2f 05 ...j.2d.6f9.../. 00:22:29.559 00000260 6c e6 24 b7 75 a6 db ab 4a 9f 6a bc 8f 72 c1 d7 l.$.u...J.j..r.. 00:22:29.559 00000270 26 67 74 8c b7 a8 5b 56 e1 88 19 30 ab 57 5c f6 >...[V...0.W\. 00:22:29.559 00000280 f1 1c ee 33 ed aa eb b1 4a d9 b3 ae b8 16 89 6a ...3....J......j 00:22:29.559 00000290 29 e6 17 39 22 21 4c de 25 0e 0a c8 98 11 83 2d )..9"!L.%......- 00:22:29.559 000002a0 bd 38 0b 97 96 6c a9 81 67 93 04 1b 49 28 07 12 .8...l..g...I(.. 00:22:29.559 000002b0 87 e0 c0 1e 64 0a bb 69 86 9e 9c a0 5d 74 5e dc ....d..i....]t^. 00:22:29.559 000002c0 75 be 0b bb 7f c6 e3 0e 37 65 5d 4b 85 de 9f 34 u.......7e]K...4 00:22:29.559 000002d0 1c 7d 2e 18 f7 39 cb f7 f4 63 8d f9 96 a3 31 e8 .}...9...c....1. 00:22:29.559 000002e0 19 49 17 13 03 dd fc fb 9f 90 a7 bb 45 aa c8 30 .I..........E..0 00:22:29.559 000002f0 b8 01 9c 9c 3c a8 69 10 53 b6 76 56 fd 56 33 98 ....<.i.S.vV.V3. 00:22:29.559 00000300 52 59 2b 37 ff d8 fa fc 92 f5 c8 5d 94 09 97 ec RY+7.......].... 00:22:29.559 00000310 ac c9 b5 38 ff 1b f4 66 ec 32 14 79 4c d2 58 7d ...8...f.2.yL.X} 00:22:29.559 00000320 13 f8 84 32 53 d3 08 28 00 b0 c1 9a 70 50 c8 9e ...2S..(....pP.. 00:22:29.559 00000330 cb a6 5a bd c5 94 4a b6 81 ac 50 a8 ce 55 8b 99 ..Z...J...P..U.. 00:22:29.559 00000340 09 4c 07 9b 44 0f c2 29 dd a5 32 73 32 60 b2 88 .L..D..)..2s2`.. 00:22:29.559 00000350 12 66 83 74 c1 48 0f 41 88 37 9c 55 94 60 38 d6 .f.t.H.A.7.U.`8. 00:22:29.559 00000360 a4 81 f5 cd a3 5a c9 83 85 ce 51 e7 0e 5c 28 70 .....Z....Q..\(p 00:22:29.559 00000370 5a ee de d1 6c 4b 60 91 1b 7b 36 9b f4 7d 52 ea Z...lK`..{6..}R. 00:22:29.559 00000380 86 88 33 a6 96 9e be 0f 8a d1 56 8c f3 85 0d 13 ..3.......V..... 00:22:29.559 00000390 ef da 90 71 9c af ed a1 0b 36 a9 92 cc 7a fe 63 ...q.....6...z.c 00:22:29.559 000003a0 4b da fa 7b d6 15 50 92 92 30 c7 b3 dd f4 46 f0 K..{..P..0....F. 00:22:29.559 000003b0 fc 97 9d 6c 6f 36 a2 83 71 9f 00 7c f0 46 59 46 ...lo6..q..|.FYF 00:22:29.559 000003c0 12 54 93 91 8b 35 f4 19 7b 03 81 78 78 2e 66 8e .T...5..{..xx.f. 00:22:29.559 000003d0 d9 a2 0e 7f c8 6e b5 e4 2a c2 2b 0c 1a 89 1c fa .....n..*.+..... 00:22:29.559 000003e0 31 a4 98 98 18 a4 db b4 4f ff ed 6e 44 ef 57 10 1.......O..nD.W. 00:22:29.559 000003f0 14 d6 aa 09 fe eb 8c c1 65 89 f9 05 5f 6b f7 fd ........e..._k.. 00:22:29.559 host pubkey: 00:22:29.559 00000000 e5 da 31 83 69 19 14 4b 99 09 3f c4 ed df 87 00 ..1.i..K..?..... 00:22:29.559 00000010 a5 8d f8 12 5b 9c d1 4f 78 c9 e0 0d 00 e4 d8 ea ....[..Ox....... 00:22:29.559 00000020 8e 75 c9 d7 70 28 d8 4f 6e 0a 65 52 74 b2 6e e0 .u..p(.On.eRt.n. 
00:22:29.559 00000030 02 49 a1 fc 3f 51 fa 57 57 e4 f7 76 a1 08 cc 5e .I..?Q.WW..v...^ 00:22:29.559 00000040 08 c3 6a cd ec 4d d1 af c2 f3 fe 9e c3 0f 39 18 ..j..M........9. 00:22:29.559 00000050 89 30 ef 01 21 2a f8 61 f3 55 fe c8 25 02 f1 62 .0..!*.a.U..%..b 00:22:29.559 00000060 b3 13 de 07 c7 ac 81 f0 e7 3c e0 53 2b dd 91 ea .........<.S+... 00:22:29.559 00000070 10 3f 50 3a 23 78 c0 30 f1 a3 36 0d 56 49 65 bf .?P:#x.0..6.VIe. 00:22:29.559 00000080 11 41 ce 0a c8 31 9f 97 9e 13 cf 13 87 7b 42 73 .A...1.......{Bs 00:22:29.559 00000090 01 4a eb 27 ce 87 f3 6e f4 cc ab 8c 9c d8 e1 8d .J.'...n........ 00:22:29.559 000000a0 e5 8f 84 99 ba d8 1d a7 b4 83 4c 63 f8 bd bc eb ..........Lc.... 00:22:29.559 000000b0 2c d9 74 89 de df d1 1b 99 8f 18 5f 09 e6 1c 41 ,.t........_...A 00:22:29.559 000000c0 d4 ef de 71 df 29 67 fc b3 e7 ba bb 53 dc d4 c7 ...q.)g.....S... 00:22:29.559 000000d0 c8 4c 6a 36 83 08 45 3a 5b f9 8b 48 1b 07 f9 2a .Lj6..E:[..H...* 00:22:29.559 000000e0 22 aa d4 c6 72 02 8f 58 7b 0b 9d 82 3b f5 b0 94 "...r..X{...;... 00:22:29.559 000000f0 a4 16 6a 11 cf 36 65 5b c8 35 71 00 c3 6e a7 5e ..j..6e[.5q..n.^ 00:22:29.559 00000100 8e 4e 6e 2f 8f cc 65 50 7a 7b 55 c1 b5 c5 51 46 .Nn/..ePz{U...QF 00:22:29.559 00000110 ec 6e be 82 37 ea ff 04 fc 95 a7 e0 1c 58 4c 09 .n..7........XL. 00:22:29.559 00000120 56 db 46 33 50 b1 e6 51 0c bb 51 4b a1 86 24 2e V.F3P..Q..QK..$. 00:22:29.559 00000130 2c 5a 91 7e 45 5c 26 60 a9 c8 e8 34 69 20 a5 c6 ,Z.~E\&`...4i .. 00:22:29.559 00000140 7e 54 85 5d 90 1a e1 10 bf 50 fe f7 37 3c 8f 7e ~T.].....P..7<.~ 00:22:29.559 00000150 89 ac a0 5c ba a8 54 ac 7d aa 1e a9 53 65 3f 52 ...\..T.}...Se?R 00:22:29.559 00000160 31 67 71 df c8 90 65 07 09 6a 20 4d 4f c3 27 da 1gq...e..j MO.'. 00:22:29.559 00000170 0e 14 73 69 92 68 a9 49 96 49 34 9e 23 39 aa af ..si.h.I.I4.#9.. 00:22:29.559 00000180 6c b8 61 6b ee df 77 15 d3 7f 77 8f 78 4a 53 ea l.ak..w...w.xJS. 00:22:29.559 00000190 59 b6 06 f3 c1 36 d1 70 09 7e 6f 7d f8 e5 fa c7 Y....6.p.~o}.... 00:22:29.559 000001a0 62 96 5b 3b 2e d1 b9 e5 5a 87 e2 19 db 9e 63 da b.[;....Z.....c. 00:22:29.559 000001b0 1e 0f 05 0f 9b 26 d6 a1 c7 38 d8 3c 52 07 3c 7c .....&...8...Hf.+ 00:22:29.559 000001e0 48 8f a2 b0 e9 80 8f 2e 6e ec 16 ed e9 52 14 12 H.......n....R.. 00:22:29.559 000001f0 9f 3d 7d 8c 5c 97 d0 83 3a 45 1d 73 df ef 6a 45 .=}.\...:E.s..jE 00:22:29.559 00000200 52 90 c8 71 e8 3e 7c 94 6a 2b bd c9 2a fe a5 e5 R..q.>|.j+..*... 00:22:29.559 00000210 cb 60 6c f2 a1 15 90 a3 d9 b3 63 c5 b8 21 a3 b1 .`l.......c..!.. 00:22:29.559 00000220 11 dc 1c 54 cb 5b 2c db d5 62 1f a0 ee 3d 90 a3 ...T.[,..b...=.. 00:22:29.559 00000230 31 4f 2c 3f cc d6 47 e3 a9 ee 67 3f 54 18 d4 bc 1O,?..G...g?T... 00:22:29.559 00000240 0e 2b 6d 06 c0 d1 74 ff 98 ab a7 f5 3c c3 dc 1f .+m...t.....<... 00:22:29.559 00000250 21 8c 6d 00 f0 7c 13 1e 72 9b e9 c7 01 07 94 ac !.m..|..r....... 00:22:29.559 00000260 df 92 34 25 d1 37 c2 18 0e 0f 6d 15 88 6a 1d 9e ..4%.7....m..j.. 00:22:29.559 00000270 d2 93 f8 8f ef 69 4a 4e 85 12 f4 8f 20 d9 3d dc .....iJN.... .=. 00:22:29.559 00000280 ee 36 ae d0 88 66 26 0c 18 30 77 cf 88 e3 69 e3 .6...f&..0w...i. 00:22:29.559 00000290 92 7f 9a aa 99 74 01 da 09 fe 7f f8 92 7c 92 25 .....t.......|.% 00:22:29.559 000002a0 32 0a be d5 71 19 c1 09 de 13 39 8b 14 f3 0a 45 2...q.....9....E 00:22:29.559 000002b0 e6 1b 1c d4 94 17 03 1f 92 ea 9b 07 3f cd 56 9a ............?.V. 00:22:29.559 000002c0 42 0e a5 39 90 76 9a d5 45 e5 3d 38 5f f7 49 2e B..9.v..E.=8_.I. 
00:22:29.559 000002d0 80 bd d5 c7 ba 7d 54 1d 39 28 96 bd 1e 71 4b ce .....}T.9(...qK. 00:22:29.559 000002e0 f5 a7 be 45 ac b4 f2 3f b9 2d d2 5b a8 94 9f ce ...E...?.-.[.... 00:22:29.559 000002f0 40 eb 0c e5 4f 14 98 a9 93 d5 10 f1 37 cc 48 8e @...O.......7.H. 00:22:29.559 00000300 6e ea b5 db 66 25 92 43 3d 93 fb e9 ca 52 8a b4 n...f%.C=....R.. 00:22:29.559 00000310 38 77 ae 6e 06 9e 83 71 9f 9e 25 10 f0 2c 3f 4e 8w.n...q..%..,?N 00:22:29.559 00000320 43 48 1b 3d 6a c3 c8 81 88 72 7b ba dd 00 09 a4 CH.=j....r{..... 00:22:29.559 00000330 c3 59 5a e5 3f ff 62 e1 8e 9a cc 03 2a 1d 0d 9b .YZ.?.b.....*... 00:22:29.559 00000340 85 90 1b cd 92 bd ab 50 92 6e 15 9a 15 ba 44 0b .......P.n....D. 00:22:29.559 00000350 98 a5 d1 60 e6 3a 77 9f 85 7b ce 6b b6 b0 66 f4 ...`.:w..{.k..f. 00:22:29.559 00000360 a9 93 98 0d 7c 25 b5 e9 9e e4 f9 61 9e 2e d0 0b ....|%.....a.... 00:22:29.559 00000370 ae da 6f 39 35 b0 ac 24 5d f6 77 9d 5e b6 47 2a ..o95..$].w.^.G* 00:22:29.559 00000380 61 62 05 bc 6c 10 4a 84 05 4f 08 15 32 c5 b5 d8 ab..l.J..O..2... 00:22:29.559 00000390 f4 f5 43 92 cb be c6 31 de be 10 72 da 42 54 ea ..C....1...r.BT. 00:22:29.559 000003a0 25 af 61 fe c4 6e fc d9 60 39 d9 bf 2c 73 c9 8d %.a..n..`9..,s.. 00:22:29.559 000003b0 6f 16 8b 7b 5d 3d 42 ae ff df 3f a0 4e 66 9d 1e o..{]=B...?.Nf.. 00:22:29.559 000003c0 9c c9 ed 3d 88 90 8d 2d 85 fd 99 c2 d6 ef 88 da ...=...-........ 00:22:29.559 000003d0 58 96 6f 44 c1 47 30 88 e3 a3 d1 2a fb a9 a7 7d X.oD.G0....*...} 00:22:29.559 000003e0 ab ae 97 e4 83 49 b4 eb 0f 1d a7 d0 79 af 0a b7 .....I......y... 00:22:29.559 000003f0 d5 b4 98 70 2c 61 ae 02 f5 2c 83 db 32 ab b2 e6 ...p,a...,..2... 00:22:29.559 dh secret: 00:22:29.559 00000000 cd 4b 01 aa 22 22 5a 36 f3 a1 9a 31 ae 36 c4 22 .K..""Z6...1.6." 00:22:29.559 00000010 6c f3 5c d6 c5 7d da ad 4d c4 b3 b9 eb de 80 0b l.\..}..M....... 00:22:29.559 00000020 db 46 69 72 0a 82 61 ef 24 20 1b 7c d8 8f 73 47 .Fir..a.$ .|..sG 00:22:29.559 00000030 66 ba b1 35 dd ba 06 03 bb 06 32 7b 53 06 de af f..5......2{S... 00:22:29.559 00000040 fa dd 1b 73 19 f3 7e d0 e8 57 4c 40 a9 3f 03 14 ...s..~..WL@.?.. 00:22:29.559 00000050 74 c1 cd 73 42 85 30 87 e2 ba ff 9f c7 a4 b7 43 t..sB.0........C 00:22:29.559 00000060 68 db 57 cf d9 e9 62 46 38 b5 4c 38 a8 9f 51 ce h.W...bF8.L8..Q. 00:22:29.559 00000070 be dc 84 9d 99 6b dc d9 b4 d6 91 a4 14 ec cd f9 .....k.......... 00:22:29.559 00000080 3b 9b d0 3f ea ed 95 2c 9a ef b1 9b 79 99 ee d8 ;..?...,....y... 00:22:29.559 00000090 52 3f 19 72 1c b3 d3 41 cb c2 1b 32 66 10 4e ca R?.r...A...2f.N. 00:22:29.559 000000a0 69 77 1a 30 bd fc ce d9 15 67 4e 88 12 1d 68 c4 iw.0.....gN...h. 00:22:29.559 000000b0 f9 59 0d 90 7b 40 4b f2 94 ac 07 13 c7 a6 6d 0b .Y..{@K.......m. 00:22:29.559 000000c0 73 7e 2d ea a9 e3 11 69 47 a4 7b db 12 98 9e 92 s~-....iG.{..... 00:22:29.559 000000d0 08 fd 13 b0 da c6 44 50 5f db e2 c2 49 92 28 bd ......DP_...I.(. 00:22:29.559 000000e0 8c c0 e7 a4 d0 4e 71 51 77 9c c7 2a d5 ec 9e cf .....NqQw..*.... 00:22:29.559 000000f0 9d 56 ea 63 33 34 3c a0 0a b1 b0 9b e8 f1 85 7c .V.c34<........| 00:22:29.559 00000100 6b d7 6d 6b 7a a1 a2 81 34 bb c9 36 b9 a4 b7 74 k.mkz...4..6...t 00:22:29.559 00000110 1c 43 aa ae b6 7f 5d c9 6b d4 4a b6 a0 be 5c a5 .C....].k.J...\. 00:22:29.559 00000120 99 9e 38 3a 40 36 00 02 07 97 7f 86 9d 46 6c 2c ..8:@6.......Fl, 00:22:29.559 00000130 cc 35 bb 2d 5f 1e ac d5 4a 2d a5 e4 47 4d ba ef .5.-_...J-..GM.. 00:22:29.559 00000140 6e 0f de 0e c7 b1 d0 d5 6f 6a 5f 81 09 60 49 9d n.......oj_..`I. 
00:22:29.559 00000150 e9 28 f2 98 aa 9f 17 3b 58 78 ef 6b 2f 20 82 c0 .(.....;Xx.k/ .. 00:22:29.559 00000160 25 42 98 3c 31 ca ed bb 4c bb ab f9 2c 74 80 d2 %B.<1...L...,t.. 00:22:29.559 00000170 20 e7 ce c6 20 f5 d9 a3 6b e0 cb 40 87 b4 90 05 ... ...k..@.... 00:22:29.559 00000180 32 bf 2c 82 ee 6c d8 52 b1 3c 49 5b 9d 7f 0e bc 2.,..l.R........9 00:22:29.559 00000270 f6 b8 31 bc 55 ca 25 44 a5 c3 28 f6 0e a7 58 7e ..1.U.%D..(...X~ 00:22:29.559 00000280 d4 b0 73 d3 37 fd a6 db d0 18 45 5b 2f f3 9f 9c ..s.7.....E[/... 00:22:29.559 00000290 a8 3e 39 8f 49 1e 23 12 3d 33 d6 e9 ff ab 35 50 .>9.I.#.=3....5P 00:22:29.559 000002a0 a1 fa e8 47 52 24 a6 a0 e2 52 46 90 20 f8 8f 94 ...GR$...RF. ... 00:22:29.560 000002b0 63 57 b3 49 0e 0b 08 72 dc 91 7d ff fb 05 02 77 cW.I...r..}....w 00:22:29.560 000002c0 06 dd 6f c9 47 4c 59 ae 09 6f 28 97 2d 67 e2 2c ..o.GLY..o(.-g., 00:22:29.560 000002d0 80 6b 9a cc 32 a9 52 f4 1e 15 76 16 35 97 30 1e .k..2.R...v.5.0. 00:22:29.560 000002e0 07 6e e5 57 e4 87 70 fb a2 54 d2 82 3b 2f 38 28 .n.W..p..T..;/8( 00:22:29.560 000002f0 d5 9c f0 8f 9d 21 93 30 f3 f7 db ca 7b b0 51 39 .....!.0....{.Q9 00:22:29.560 00000300 c9 30 81 0b 06 34 e1 0e a4 4d bf d6 dc 2d 0d bc .0...4...M...-.. 00:22:29.560 00000310 4c 16 aa e8 12 e9 1d 8d a4 01 d7 49 b9 19 dd d2 L..........I.... 00:22:29.560 00000320 18 9c 3a 71 f1 90 0c a8 9c bb 63 bc d0 31 95 78 ..:q......c..1.x 00:22:29.560 00000330 32 a4 5e 92 88 9b 2f ee a6 14 5f a6 c5 1f 45 31 2.^.../..._...E1 00:22:29.560 00000340 7a 0b 7a b7 fb db 4f e3 ba f3 f1 15 d5 36 53 07 z.z...O......6S. 00:22:29.560 00000350 3a 1a 02 83 52 b3 02 36 f8 3e cb 8b e9 72 c3 ef :...R..6.>...r.. 00:22:29.560 00000360 08 8f 8a 6d 7b 28 56 5f 16 e4 21 f8 74 f4 d8 42 ...m{(V_..!.t..B 00:22:29.560 00000370 78 43 db 9f 40 fa 04 11 47 d3 30 12 43 ab 67 37 xC..@...G.0.C.g7 00:22:29.560 00000380 8f 69 15 37 99 d8 fa d6 d3 00 a0 bc c1 c1 90 7a .i.7...........z 00:22:29.560 00000390 cf 41 5e 55 0c 81 ab 70 e7 c8 46 5c 29 76 fe 9b .A^U...p..F\)v.. 00:22:29.560 000003a0 d5 cd 56 58 98 60 a7 96 de ca 66 10 6e 36 97 1c ..VX.`....f.n6.. 00:22:29.560 000003b0 fe 34 93 27 64 e3 7b df ad a7 4c d6 45 7a 98 c9 .4.'d.{...L.Ez.. 00:22:29.560 000003c0 64 a9 4e a3 25 06 e5 a4 b0 07 62 3d 65 bb 6c 98 d.N.%.....b=e.l. 00:22:29.560 000003d0 97 5d b1 b3 0c d8 58 60 49 c4 41 e5 43 c6 5f b1 .]....X`I.A.C._. 
00:22:29.560 000003e0 55 aa 4f 81 19 33 8e 2b e4 ef e8 af 7a dc 28 6e U.O..3.+....z.(n 00:22:29.560 000003f0 1c 1e 2c db 71 b5 e3 f0 6f e5 d8 b1 d4 09 9b 59 ..,.q...o......Y 00:22:29.560 [2024-09-27 13:27:27.251851] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key0, hash=3, dhgroup=5, seq=3775755313, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.560 [2024-09-27 13:27:27.252219] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.560 [2024-09-27 13:27:27.345854] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.560 [2024-09-27 13:27:27.346422] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.560 [2024-09-27 13:27:27.346712] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.560 [2024-09-27 13:27:27.346959] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.560 [2024-09-27 13:27:27.397927] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.560 [2024-09-27 13:27:27.398086] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.560 [2024-09-27 13:27:27.398167] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 5 (ffdhe8192) 00:22:29.560 [2024-09-27 13:27:27.398356] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.560 [2024-09-27 13:27:27.398560] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.560 ctrlr pubkey: 00:22:29.560 00000000 f4 9c 59 7a 02 ed 84 d4 37 f4 13 7f 9f d5 0a 5e ..Yz....7......^ 00:22:29.560 00000010 b8 68 55 9b 3c 8a 91 c8 95 72 85 6b 9e 81 cb 0c .hU.<....r.k.... 00:22:29.560 00000020 33 46 1c e8 9a ca 07 df dd 20 66 19 37 0f c8 fd 3F....... f.7... 00:22:29.560 00000030 c2 12 d2 e7 bc 4b a1 24 fd fb 7c 4f ec 4f 97 1e .....K.$..|O.O.. 00:22:29.560 00000040 85 45 02 8c fc 70 47 5e 7e 18 b7 1c 8e 34 20 10 .E...pG^~....4 . 00:22:29.560 00000050 e3 74 a9 89 c4 99 ae af 0d 04 6e 94 75 99 5c 32 .t........n.u.\2 00:22:29.560 00000060 2c 2c 5b 16 82 fd 3f 64 db 28 7e cc b6 87 16 b3 ,,[...?d.(~..... 00:22:29.560 00000070 e9 fd ab 5d 42 99 30 d7 01 5b 20 4e 65 be 4a c5 ...]B.0..[ Ne.J. 00:22:29.560 00000080 42 f6 87 20 20 5d a1 d9 ae fc 68 bf ff 37 71 e4 B.. ]....h..7q. 00:22:29.560 00000090 74 76 22 17 1d 1d 13 5e a8 78 05 68 c2 84 79 aa tv"....^.x.h..y. 00:22:29.560 000000a0 3a cd 0e 6e 1e 5b 0b e0 7d bc b1 4f 1d 65 0c 36 :..n.[..}..O.e.6 00:22:29.560 000000b0 d7 6f 36 a6 74 05 e4 78 ed 20 b4 c4 1a d1 b7 d4 .o6.t..x. ...... 00:22:29.560 000000c0 65 5d 1c 93 c0 e1 18 cc f5 11 63 fd 68 b4 92 f9 e]........c.h... 
00:22:29.560 000000d0 98 b8 2d 6a 83 0e e8 64 a2 80 d7 0d ef 7f a7 77 ..-j...d.......w 00:22:29.560 000000e0 01 e1 a3 2d 7d ef c2 b5 30 07 6e f1 5c 5a 37 4a ...-}...0.n.\Z7J 00:22:29.560 000000f0 ec d1 20 09 84 cc 5f d6 c7 1c 86 d8 5b 63 3b 49 .. ..._.....[c;I 00:22:29.560 00000100 ee fa 25 79 81 aa d0 bd 3f 48 de 65 e5 8c d7 a6 ..%y....?H.e.... 00:22:29.560 00000110 9e 2b b4 d5 f9 25 ee 82 88 2f 18 8f f7 38 2e f9 .+...%.../...8.. 00:22:29.560 00000120 db 0c 84 bb fd 26 44 aa a8 ba 79 7e 2c 11 a7 86 .....&D...y~,... 00:22:29.560 00000130 74 5a 2f d8 0d de 0f 6d 3a 95 be 1d 15 6c ab 5f tZ/....m:....l._ 00:22:29.560 00000140 5d 54 c1 db 09 0a d9 de 12 76 91 6a 43 0e 28 31 ]T.......v.jC.(1 00:22:29.560 00000150 32 7c cd b9 6e 2b 84 6d 1a a3 52 d0 44 3a 0a b1 2|..n+.m..R.D:.. 00:22:29.560 00000160 f9 f9 98 b3 c9 fa ab 0f 93 92 fc f4 89 40 82 3e .............@.> 00:22:29.560 00000170 39 97 61 93 f4 03 c9 5b 43 14 83 d8 e5 c7 55 58 9.a....[C.....UX 00:22:29.560 00000180 39 0e 54 38 74 d5 bf 96 14 25 d7 28 a3 ea 1d 01 9.T8t....%.(.... 00:22:29.560 00000190 0c c0 d4 55 68 32 ee 32 8c 43 90 44 71 b4 55 53 ...Uh2.2.C.Dq.US 00:22:29.560 000001a0 eb e3 f3 8e 62 07 55 79 4b b5 98 d8 9f 22 3d d5 ....b.UyK...."=. 00:22:29.560 000001b0 42 e7 3a 3f 71 e1 a2 76 b7 0f 2c 03 7c 38 de 32 B.:?q..v..,.|8.2 00:22:29.560 000001c0 23 58 a8 8b c5 78 56 48 17 a4 6e 4e d5 b0 ec a3 #X...xVH..nN.... 00:22:29.560 000001d0 7c 8f 9e 5c c7 ed 20 99 9c 52 90 50 ce 80 c7 df |..\.. ..R.P.... 00:22:29.560 000001e0 50 08 c9 cf 2b 94 ad 20 fd f8 12 9c e2 2f de 61 P...+.. ...../.a 00:22:29.560 000001f0 61 e1 fb 9a cb 87 07 61 4a b5 c0 11 77 5a dd 28 a......aJ...wZ.( 00:22:29.560 00000200 87 b9 87 02 92 35 26 fa 63 9f 6a 01 eb fc 94 6d .....5&.c.j....m 00:22:29.560 00000210 f0 38 f9 07 f3 9d b4 96 5b 78 e3 40 55 01 15 d8 .8......[x.@U... 00:22:29.560 00000220 c8 b5 17 10 60 4f 33 07 05 b0 c0 3b 75 f6 ca 05 ....`O3....;u... 00:22:29.560 00000230 c1 77 19 5f 76 92 3e b8 29 bf b4 06 f3 4e 0e 2f .w._v.>.)....N./ 00:22:29.560 00000240 97 80 94 03 62 ba 1d c8 a9 6d 29 da c5 72 d0 16 ....b....m)..r.. 00:22:29.560 00000250 8b 81 9a 6a 8d 32 64 a7 36 66 39 c5 c2 89 2f 05 ...j.2d.6f9.../. 00:22:29.560 00000260 6c e6 24 b7 75 a6 db ab 4a 9f 6a bc 8f 72 c1 d7 l.$.u...J.j..r.. 00:22:29.560 00000270 26 67 74 8c b7 a8 5b 56 e1 88 19 30 ab 57 5c f6 >...[V...0.W\. 00:22:29.560 00000280 f1 1c ee 33 ed aa eb b1 4a d9 b3 ae b8 16 89 6a ...3....J......j 00:22:29.560 00000290 29 e6 17 39 22 21 4c de 25 0e 0a c8 98 11 83 2d )..9"!L.%......- 00:22:29.560 000002a0 bd 38 0b 97 96 6c a9 81 67 93 04 1b 49 28 07 12 .8...l..g...I(.. 00:22:29.560 000002b0 87 e0 c0 1e 64 0a bb 69 86 9e 9c a0 5d 74 5e dc ....d..i....]t^. 00:22:29.560 000002c0 75 be 0b bb 7f c6 e3 0e 37 65 5d 4b 85 de 9f 34 u.......7e]K...4 00:22:29.560 000002d0 1c 7d 2e 18 f7 39 cb f7 f4 63 8d f9 96 a3 31 e8 .}...9...c....1. 00:22:29.560 000002e0 19 49 17 13 03 dd fc fb 9f 90 a7 bb 45 aa c8 30 .I..........E..0 00:22:29.560 000002f0 b8 01 9c 9c 3c a8 69 10 53 b6 76 56 fd 56 33 98 ....<.i.S.vV.V3. 00:22:29.560 00000300 52 59 2b 37 ff d8 fa fc 92 f5 c8 5d 94 09 97 ec RY+7.......].... 00:22:29.560 00000310 ac c9 b5 38 ff 1b f4 66 ec 32 14 79 4c d2 58 7d ...8...f.2.yL.X} 00:22:29.560 00000320 13 f8 84 32 53 d3 08 28 00 b0 c1 9a 70 50 c8 9e ...2S..(....pP.. 00:22:29.560 00000330 cb a6 5a bd c5 94 4a b6 81 ac 50 a8 ce 55 8b 99 ..Z...J...P..U.. 00:22:29.560 00000340 09 4c 07 9b 44 0f c2 29 dd a5 32 73 32 60 b2 88 .L..D..)..2s2`.. 
00:22:29.560 00000350 12 66 83 74 c1 48 0f 41 88 37 9c 55 94 60 38 d6 .f.t.H.A.7.U.`8. 00:22:29.560 00000360 a4 81 f5 cd a3 5a c9 83 85 ce 51 e7 0e 5c 28 70 .....Z....Q..\(p 00:22:29.560 00000370 5a ee de d1 6c 4b 60 91 1b 7b 36 9b f4 7d 52 ea Z...lK`..{6..}R. 00:22:29.560 00000380 86 88 33 a6 96 9e be 0f 8a d1 56 8c f3 85 0d 13 ..3.......V..... 00:22:29.560 00000390 ef da 90 71 9c af ed a1 0b 36 a9 92 cc 7a fe 63 ...q.....6...z.c 00:22:29.560 000003a0 4b da fa 7b d6 15 50 92 92 30 c7 b3 dd f4 46 f0 K..{..P..0....F. 00:22:29.560 000003b0 fc 97 9d 6c 6f 36 a2 83 71 9f 00 7c f0 46 59 46 ...lo6..q..|.FYF 00:22:29.560 000003c0 12 54 93 91 8b 35 f4 19 7b 03 81 78 78 2e 66 8e .T...5..{..xx.f. 00:22:29.560 000003d0 d9 a2 0e 7f c8 6e b5 e4 2a c2 2b 0c 1a 89 1c fa .....n..*.+..... 00:22:29.560 000003e0 31 a4 98 98 18 a4 db b4 4f ff ed 6e 44 ef 57 10 1.......O..nD.W. 00:22:29.560 000003f0 14 d6 aa 09 fe eb 8c c1 65 89 f9 05 5f 6b f7 fd ........e..._k.. 00:22:29.560 host pubkey: 00:22:29.560 00000000 f8 b6 13 cc d6 3a e6 18 a8 58 69 e0 50 87 b4 18 .....:...Xi.P... 00:22:29.560 00000010 da cb f2 74 87 df 35 31 67 54 be 7d 58 34 db c7 ...t..51gT.}X4.. 00:22:29.560 00000020 2e 6f 8a 45 ba 3f bd 59 d6 71 04 19 06 4d a0 5f .o.E.?.Y.q...M._ 00:22:29.560 00000030 c6 b1 ed eb 7b 61 d5 8a 1b a8 f7 68 ee 92 14 fe ....{a.....h.... 00:22:29.560 00000040 4d cf 7a 61 f6 e1 4b c9 37 3d 8a 42 3b dd 61 1f M.za..K.7=.B;.a. 00:22:29.560 00000050 15 8f 9a 7a 8a 81 bf c3 6b 3c 3a 7d d0 a0 92 c5 ...z....k<:}.... 00:22:29.560 00000060 75 5e ec c8 ef 1a c5 52 f4 45 2f 95 5f a5 8f 69 u^.....R.E/._..i 00:22:29.560 00000070 a0 25 ab 0f b5 57 fe d4 8c 5c 5c 0d b6 c5 ca b8 .%...W...\\..... 00:22:29.560 00000080 92 8b 3d 4d 0a 72 82 a3 36 26 ed 06 80 0a bf 64 ..=M.r..6&.....d 00:22:29.560 00000090 97 ac ca 43 f1 0c da eb e2 e9 e1 7f 1b 52 30 38 ...C.........R08 00:22:29.560 000000a0 c3 db cd b6 bc 0e 64 ac c2 a7 a9 1c ec 1d c0 c9 ......d......... 00:22:29.560 000000b0 4d 88 df ee e4 a8 71 d9 7c 30 cb db 7c cf 47 25 M.....q.|0..|.G% 00:22:29.560 000000c0 29 19 fc d5 5e 66 ab d3 c9 14 c3 32 0b 1f f9 3d )...^f.....2...= 00:22:29.560 000000d0 07 32 fe c4 a9 5a 97 24 84 94 d6 c4 16 cc 47 f9 .2...Z.$......G. 00:22:29.560 000000e0 6f 68 77 ef d2 c1 94 b1 df 0e 6e 23 92 0a 9a 90 ohw.......n#.... 00:22:29.560 000000f0 bf 29 be a2 40 3e fe 53 17 1f 7b b6 9d 74 f2 b1 .)..@>.S..{..t.. 00:22:29.560 00000100 5e bc fe 15 b6 d9 36 84 c6 9d ff 66 36 d4 f4 1a ^.....6....f6... 00:22:29.560 00000110 f9 64 68 65 f7 77 79 42 5c 1c 3c 7a 23 41 83 61 .dhe.wyB\....A.. 00:22:29.560 00000150 af 6e 07 22 83 57 de eb df 89 74 a5 d2 6d 69 46 .n.".W....t..miF 00:22:29.560 00000160 79 9a 83 77 48 c8 09 51 50 13 81 cf 29 a8 10 5d y..wH..QP...)..] 00:22:29.560 00000170 b8 8d 9c 96 49 f9 44 3f 86 9e d4 f6 5e 65 a0 57 ....I.D?....^e.W 00:22:29.560 00000180 09 3d 3d 13 5b f8 4b db 8c 70 83 ee be 97 e6 ff .==.[.K..p...... 00:22:29.560 00000190 ee 78 d9 5a c5 cf 85 0c be 83 ad a8 25 32 12 7c .x.Z........%2.| 00:22:29.560 000001a0 0f 10 d1 6f 29 0b 70 10 ee 4b 42 d2 d0 a8 81 27 ...o).p..KB....' 00:22:29.560 000001b0 5b 63 41 fc eb 61 af c6 48 8e c1 46 fe 2c a6 eb [cA..a..H..F.,.. 00:22:29.560 000001c0 f8 05 bb 34 75 26 c6 b1 99 e0 37 e0 84 bd 3e 3f ...4u&....7...>? 00:22:29.560 000001d0 08 20 b7 05 b7 62 b2 9e eb 09 91 97 7b f2 a2 c4 . ...b......{... 00:22:29.560 000001e0 61 89 7d 71 9b 9d 4a c9 b6 18 5c cc 55 08 5b 50 a.}q..J...\.U.[P 00:22:29.560 000001f0 2d 6a 93 28 3c 87 2f 4d ff 1c 70 c5 30 f5 43 89 -j.(<./M..p.0.C. 
00:22:29.560 00000200 61 82 5d d8 8d 11 f9 a4 9a e3 15 e7 39 4c ab b5 a.].........9L.. 00:22:29.560 00000210 16 f3 3c 11 b5 05 8d 91 d3 88 4e a6 6b 07 79 a1 ..<.......N.k.y. 00:22:29.560 00000220 6a 70 70 b0 06 98 4d 50 d1 03 9d 1a db 81 64 51 jpp...MP......dQ 00:22:29.560 00000230 0a c8 88 f7 ad 37 37 3a fc 4e dc 21 94 03 21 07 .....77:.N.!..!. 00:22:29.560 00000240 f2 5b e1 d0 48 e1 ac 11 86 c9 8c 5b 59 c5 54 69 .[..H......[Y.Ti 00:22:29.560 00000250 b5 4f ea c1 99 51 e8 71 b3 fa 84 29 6e 33 8d 1c .O...Q.q...)n3.. 00:22:29.560 00000260 6f 7f 02 6f f0 f1 1e 3e 28 41 00 cd f3 a9 33 65 o..o...>(A....3e 00:22:29.560 00000270 b5 43 14 93 bd 91 22 2e 9b 91 ca 10 d4 51 36 ed .C...."......Q6. 00:22:29.560 00000280 be 10 51 15 38 c1 f8 a7 bd e1 02 16 31 22 69 b1 ..Q.8.......1"i. 00:22:29.560 00000290 20 91 70 bc 69 df 63 c9 eb 87 69 e1 e8 05 74 8b .p.i.c...i...t. 00:22:29.560 000002a0 b9 a7 4f 8d d6 1f 47 f4 2a 14 96 a2 d7 a0 c6 85 ..O...G.*....... 00:22:29.560 000002b0 5c 8d ba ef 41 1f a2 4b 99 32 1e 01 0d 24 33 1c \...A..K.2...$3. 00:22:29.560 000002c0 21 88 42 02 01 78 04 95 d0 0f e8 bd c5 f4 4a 13 !.B..x........J. 00:22:29.560 000002d0 0c 45 1d 8b e2 75 aa c4 71 18 e7 9c 39 2a 01 99 .E...u..q...9*.. 00:22:29.560 000002e0 ea f3 25 90 16 fc 84 ce b8 d8 bf 42 09 ef cf 43 ..%........B...C 00:22:29.560 000002f0 41 98 7b 7f 52 16 d4 9b bb ae bf 73 63 4d 73 d8 A.{.R......scMs. 00:22:29.560 00000300 ae 3f e5 5e 7f 2b 37 4a df a5 cb 55 4b 49 4c 8c .?.^.+7J...UKIL. 00:22:29.560 00000310 d4 b3 0f ab 07 41 77 be 0e bc fe 68 3f 1e b0 64 .....Aw....h?..d 00:22:29.560 00000320 91 05 ed 20 05 bc a4 6a b0 68 58 f7 df b6 5d e4 ... ...j.hX...]. 00:22:29.560 00000330 45 54 f4 ee 69 9c 45 57 06 90 f8 e8 63 12 a1 78 ET..i.EW....c..x 00:22:29.560 00000340 b4 b2 e3 17 fb 4d b6 a1 8f fc 85 6d 47 e4 f2 a8 .....M.....mG... 00:22:29.560 00000350 07 f0 4e 30 d6 9e b5 36 99 0d 97 53 77 52 f5 02 ..N0...6...SwR.. 00:22:29.560 00000360 1b 89 df c0 70 0f 36 0b 60 97 f9 f8 50 27 2a fe ....p.6.`...P'*. 00:22:29.560 00000370 7f 1d 4c 38 a7 e1 23 c0 fd 36 9f 87 42 e2 e7 a2 ..L8..#..6..B... 00:22:29.560 00000380 d5 b1 91 e5 ef df 50 ea ad e2 83 dd dc 0f 74 1d ......P.......t. 00:22:29.560 00000390 cd a5 3f ec 15 8f 56 67 e4 50 77 8b cf e3 64 78 ..?...Vg.Pw...dx 00:22:29.560 000003a0 10 10 6d a1 fa 72 ed 2a 06 0d b5 c5 74 73 66 1c ..m..r.*....tsf. 00:22:29.560 000003b0 c1 b2 e4 2b ab f2 ac 8c 00 73 88 04 5b 70 13 da ...+.....s..[p.. 00:22:29.560 000003c0 e1 25 79 db 73 75 e3 d2 5c fe 75 20 62 d7 ac 1e .%y.su..\.u b... 00:22:29.560 000003d0 9d 53 4b 35 80 69 27 e1 c8 2b bc 66 a9 5f 80 de .SK5.i'..+.f._.. 00:22:29.560 000003e0 c7 e1 ff c7 9c 33 c9 45 11 15 b4 10 d7 a4 8f 29 .....3.E.......) 00:22:29.560 000003f0 2e 70 16 fa c8 0c 49 21 87 61 70 28 b7 f7 fc e6 .p....I!.ap(.... 00:22:29.560 dh secret: 00:22:29.560 00000000 76 e3 28 de bc 2d 96 90 80 43 3e 59 73 29 ba 55 v.(..-...C>Ys).U 00:22:29.560 00000010 b5 88 df b1 ee b3 7e a1 00 8b 22 ef fb 7c e9 bc ......~..."..|.. 00:22:29.560 00000020 75 e5 96 55 09 35 e6 8a 1d a1 32 a0 c4 ba 94 eb u..U.5....2..... 00:22:29.560 00000030 86 34 32 1c 88 48 ff 66 27 6e dd 92 28 2e 22 9c .42..H.f'n..(.". 00:22:29.560 00000040 49 b4 53 36 47 e1 86 13 36 41 d3 40 35 bb 42 80 I.S6G...6A.@5.B. 00:22:29.560 00000050 b5 cb d6 e0 fe b5 97 44 59 f0 80 11 e0 d8 76 54 .......DY.....vT 00:22:29.560 00000060 cf a2 aa d5 a2 50 af 63 35 7a 31 34 cc 20 9e 94 .....P.c5z14. .. 00:22:29.560 00000070 bd f3 b1 74 b1 2c d2 53 d6 33 37 a1 b7 74 96 20 ...t.,.S.37..t. 
00:22:29.560 00000080 fa 2f b5 bb 3d 68 b1 02 a9 5d 3b e6 b7 78 21 8e ./..=h...];..x!. 00:22:29.560 00000090 f7 89 ed e4 be 81 7d 6c fc 5e 3d 0b 68 2c 3a 8a ......}l.^=.h,:. 00:22:29.560 000000a0 e6 98 55 7c bc fb ac c9 0f ec bf 37 36 5c ec f1 ..U|.......76\.. 00:22:29.560 000000b0 b1 6c dc 9a 5b fc fc 8c 5b f9 c6 58 30 eb c0 f0 .l..[...[..X0... 00:22:29.560 000000c0 35 84 cf c0 56 8b fa f6 dc 9c 22 82 c3 68 5e b7 5...V....."..h^. 00:22:29.560 000000d0 ad a4 e2 78 6d d0 a4 61 ae ab 7b fc 9c 2b c5 3c ...xm..a..{..+.< 00:22:29.560 000000e0 fb 64 51 f7 57 cd c0 78 c6 e0 3d 8c 7f d6 3d 1a .dQ.W..x..=...=. 00:22:29.560 000000f0 a5 ff 5c b6 e7 69 dc 28 d7 f4 47 71 01 dd 11 37 ..\..i.(..Gq...7 00:22:29.560 00000100 d0 6a 88 23 8e 50 88 ba 3e 5e 85 f1 ea 13 ee 59 .j.#.P..>^.....Y 00:22:29.560 00000110 85 86 92 6a b1 7b dc a8 71 1c 88 9d 2e e5 1e b0 ...j.{..q....... 00:22:29.561 00000120 4b 4f de 41 f0 6c f6 49 47 52 f7 51 7d c9 33 fb KO.A.l.IGR.Q}.3. 00:22:29.561 00000130 af 65 a3 41 22 d8 3a 6a 88 3b 5a a2 b3 8e 9d 2b .e.A".:j.;Z....+ 00:22:29.561 00000140 9f a7 4e 13 58 46 47 b3 0f 11 77 d6 e8 49 86 66 ..N.XFG...w..I.f 00:22:29.561 00000150 bd 5c 0c c9 6d 4d be be 0b 03 78 44 e3 c1 d9 fb .\..mM....xD.... 00:22:29.561 00000160 4c b1 6c ed 23 34 df 4e c0 38 49 1f 70 eb d3 57 L.l.#4.N.8I.p..W 00:22:29.561 00000170 8c b4 8e 19 68 74 1e 2c 8b ba b9 d3 2f a7 35 d8 ....ht.,..../.5. 00:22:29.561 00000180 a0 19 86 4f bf c5 59 a3 16 19 68 66 82 75 01 23 ...O..Y...hf.u.# 00:22:29.561 00000190 61 6f a3 9d 5f de 87 7f 84 05 81 96 c4 2b 28 ae ao.._........+(. 00:22:29.561 000001a0 5b 36 df cb 08 78 26 e8 64 1d 60 9c 53 e9 ae ea [6...x&.d.`.S... 00:22:29.561 000001b0 20 a8 3c b9 c0 95 ee 60 bb be 71 0b 90 e6 51 ec .<....`..q...Q. 00:22:29.561 000001c0 44 fc 18 21 fd 45 5a c6 59 3b b3 72 5d 1e df 0d D..!.EZ.Y;.r]... 00:22:29.561 000001d0 54 09 11 31 9d 70 06 ae f8 18 34 93 d5 e1 4a a2 T..1.p....4...J. 00:22:29.561 000001e0 8e 29 91 84 89 c4 66 dc ed 52 a2 5a 8d 22 f6 7d .)....f..R.Z.".} 00:22:29.561 000001f0 60 2c 65 19 93 7a 5d 6a 37 91 80 83 64 c2 c7 c9 `,e..z]j7...d... 00:22:29.561 00000200 49 9f b4 36 52 16 87 7b 2a 06 1f 70 c6 a3 75 00 I..6R..{*..p..u. 00:22:29.561 00000210 6d e8 47 9d c7 c7 74 c6 6a d5 6f 66 ca 5d fa f7 m.G...t.j.of.].. 00:22:29.561 00000220 bc 04 70 b3 04 90 15 d3 78 48 c1 3d 77 1d 43 1c ..p.....xH.=w.C. 00:22:29.561 00000230 d4 b4 76 dd b4 98 e9 73 13 ca b2 26 24 5d 6b 8f ..v....s...&$]k. 00:22:29.561 00000240 3d 33 14 b3 17 57 df 55 8c 6f f2 b5 f0 29 9c 97 =3...W.U.o...).. 00:22:29.561 00000250 8e d6 a1 c6 a6 7e d4 e4 be 21 66 10 b1 ca c9 78 .....~...!f....x 00:22:29.561 00000260 e9 d0 90 e9 12 f0 4b ee bc 18 b0 f7 3a 9d da c9 ......K.....:... 00:22:29.561 00000270 9d ae cc 02 82 99 4c b6 3c b8 95 ce 7f fe 23 1f ......L.<.....#. 00:22:29.561 00000280 e5 d3 e6 17 df 38 ea e2 e1 38 ae 55 bf df 9c 46 .....8...8.U...F 00:22:29.561 00000290 fa 14 9f 77 98 16 76 27 a5 ee af 43 03 69 14 9f ...w..v'...C.i.. 00:22:29.561 000002a0 dc 2b 6c a8 01 68 20 87 17 fa 54 3b 53 2f 34 68 .+l..h ...T;S/4h 00:22:29.561 000002b0 bc 31 77 f1 f2 c8 64 5c 67 13 7b 7b c6 ad 6c bc .1w...d\g.{{..l. 00:22:29.561 000002c0 ff de 06 84 bd 33 58 e0 27 e1 76 8c b0 57 52 5c .....3X.'.v..WR\ 00:22:29.561 000002d0 fe 2e 09 2d da ec ac 72 42 2f 54 2b 1b 8b a5 5f ...-...rB/T+..._ 00:22:29.561 000002e0 f0 3a a7 6e 7e 9f 9e bd 41 05 46 8d 16 ce 6d 06 .:.n~...A.F...m. 00:22:29.561 000002f0 2c 7d aa bc 01 af 06 19 2f a1 51 c0 ba 1c 1d db ,}....../.Q..... 
00:22:29.561 00000300 c0 9f 52 4e f3 2c 28 38 03 71 6e 92 e3 4e 6d c6 ..RN.,(8.qn..Nm. 00:22:29.561 00000310 08 53 71 44 16 15 14 de b3 b9 bd a0 18 70 ff 91 .SqD.........p.. 00:22:29.561 00000320 b0 03 76 28 d3 a6 0c 54 a3 47 75 60 d6 69 66 57 ..v(...T.Gu`.ifW 00:22:29.561 00000330 38 01 49 36 40 98 9f b4 e9 ba b6 cc 69 1f 96 26 8.I6@.......i..& 00:22:29.561 00000340 94 eb 3b 25 13 ea 46 fa e2 77 a5 14 1b 62 04 9c ..;%..F..w...b.. 00:22:29.561 00000350 1a 30 c7 44 0e 23 01 18 6b 7b 8f d0 43 1f 33 e3 .0.D.#..k{..C.3. 00:22:29.561 00000360 95 bc a3 26 b0 45 1d 1d 17 f1 a5 39 a8 5a c3 fa ...&.E.....9.Z.. 00:22:29.561 00000370 5d 13 9a 71 1e c0 f8 71 27 3e eb 7e d7 f3 73 a7 ]..q...q'>.~..s. 00:22:29.561 00000380 e2 c2 cb 1b 86 4d 73 c9 2d 49 6e c9 a6 cd 01 a6 .....Ms.-In..... 00:22:29.561 00000390 6c ef bb d2 40 0a 58 1d 46 44 eb 37 4f 49 1f 69 l...@.X.FD.7OI.i 00:22:29.561 000003a0 08 29 60 a3 42 60 3b f6 d8 6f cb 45 7c ec 49 3a .)`.B`;..o.E|.I: 00:22:29.561 000003b0 8b 6e 90 ef 48 bc 52 23 c0 93 0d e5 bd 27 0f 61 .n..H.R#.....'.a 00:22:29.561 000003c0 8f 33 a0 a3 73 20 28 82 af 83 9f 5c 89 83 f1 e3 .3..s (....\.... 00:22:29.561 000003d0 50 ed 23 6d 17 20 44 93 3e fe 62 d3 a7 50 fd 8c P.#m. D.>.b..P.. 00:22:29.561 000003e0 3e 38 c5 ba 4f ad b8 81 3e 51 33 c9 65 87 6c d3 >8..O...>Q3.e.l. 00:22:29.561 000003f0 fb 3c 33 94 86 0b 75 57 97 e7 18 a2 bd 24 82 45 .<3...uW.....$.E 00:22:29.561 [2024-09-27 13:27:27.565424] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key0, hash=3, dhgroup=5, seq=3775755314, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.561 [2024-09-27 13:27:27.565798] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.561 [2024-09-27 13:27:27.653052] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.561 [2024-09-27 13:27:27.653433] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.561 [2024-09-27 13:27:27.653638] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.561 [2024-09-27 13:27:27.653864] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.561 [2024-09-27 13:27:27.818959] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.561 [2024-09-27 13:27:27.819160] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.561 [2024-09-27 13:27:27.819241] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 5 (ffdhe8192) 00:22:29.561 [2024-09-27 13:27:27.819432] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.561 [2024-09-27 13:27:27.819622] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.561 ctrlr pubkey: 00:22:29.561 00000000 54 4d 33 a4 e3 9a 81 8d ba 53 e1 7f c0 06 b3 df TM3......S...... 
00:22:29.561 00000010 74 b4 65 c7 bf 9c 14 ac 98 3a 2e 82 7c 52 a5 62 t.e......:..|R.b 00:22:29.561 00000020 a5 07 7a 60 26 50 34 5c 8d fa 5a f0 4a 57 17 54 ..z`&P4\..Z.JW.T 00:22:29.561 00000030 bc fd 01 12 12 6e b0 51 ec 72 d3 9d fe 42 8b 27 .....n.Q.r...B.' 00:22:29.561 00000040 bd 88 b0 c6 53 12 19 94 89 d1 16 96 90 59 ea 93 ....S........Y.. 00:22:29.561 00000050 09 1f d5 4c b5 2b 38 f6 54 50 6d 43 2c 22 76 81 ...L.+8.TPmC,"v. 00:22:29.561 00000060 af d6 e1 c8 16 a9 fd 5f c7 ce 93 72 80 3c df 96 ......._...r.<.. 00:22:29.561 00000070 4c 91 e5 16 69 05 22 66 81 4f 5d d3 f6 c6 e7 98 L...i."f.O]..... 00:22:29.561 00000080 85 58 02 3e e7 47 8f 0f 38 75 10 04 f0 83 be b3 .X.>.G..8u...... 00:22:29.561 00000090 43 88 38 ff 7b dc 9d 05 88 fb cc d4 e0 18 42 8c C.8.{.........B. 00:22:29.561 000000a0 6b d8 08 c5 0c 09 15 8d 30 f9 2c d4 97 fc d2 3f k.......0.,....? 00:22:29.561 000000b0 57 29 4e 60 47 9a ac d5 52 fb 27 2a 70 98 e5 34 W)N`G...R.'*p..4 00:22:29.561 000000c0 36 43 ce da 64 0d 5e 21 53 c3 ef 54 9a 6f af 76 6C..d.^!S..T.o.v 00:22:29.561 000000d0 b9 d2 cc ed 2a 90 09 22 79 9e 41 b2 fa fd 23 5b ....*.."y.A...#[ 00:22:29.561 000000e0 b8 85 90 bd c1 37 5b 98 fd 59 b4 c9 4c 33 da b4 .....7[..Y..L3.. 00:22:29.561 000000f0 07 ff c6 f0 c8 88 20 50 63 4a 0f ec 5d 12 25 5c ...... PcJ..].%\ 00:22:29.561 00000100 33 d2 55 50 0e d4 c4 1e 4a ae 30 56 51 9d 26 c8 3.UP....J.0VQ.&. 00:22:29.561 00000110 71 34 f6 93 54 02 a0 05 0c 26 a8 08 8a d8 9b 62 q4..T....&.....b 00:22:29.561 00000120 30 4b 61 65 77 d3 18 9f 77 bf 91 27 4e 55 c9 7a 0Kaew...w..'NU.z 00:22:29.561 00000130 49 b0 18 7a 42 e6 6a 49 b2 31 11 62 f6 af 3c 54 I..zB.jI.1.b.. 00:22:29.561 000001a0 ba 24 3e de 3a 61 ec c7 74 d4 05 01 f3 3f 12 61 .$>.:a..t....?.a 00:22:29.561 000001b0 54 29 ae 81 94 5f 8b f3 a4 9e d0 df 6d ae 3e e5 T)..._......m.>. 00:22:29.561 000001c0 5c 68 11 a9 d3 2d b1 2b e4 23 ce e3 aa 27 2e 22 \h...-.+.#...'." 00:22:29.561 000001d0 e8 84 a6 26 78 30 f9 25 8a a1 7a fb 7c 74 5b 00 ...&x0.%..z.|t[. 00:22:29.561 000001e0 9b 54 b9 08 c2 44 cc 4b 36 0d 2f 5f 99 0c cd be .T...D.K6./_.... 00:22:29.561 000001f0 3a d9 99 97 3b 8b 07 9b 95 4c e6 c2 f1 2a bb 78 :...;....L...*.x 00:22:29.561 00000200 b4 25 08 95 58 09 ee 76 d9 17 7d 77 b7 83 e9 1a .%..X..v..}w.... 00:22:29.561 00000210 6e 5d a3 e4 c5 ab f5 bd a7 5c 17 16 8a 77 9d 75 n].......\...w.u 00:22:29.561 00000220 0f d1 26 a9 7b 20 d4 a0 12 55 84 21 2e 1f 14 06 ..&.{ ...U.!.... 00:22:29.561 00000230 94 f2 41 1b 89 65 19 3a 14 99 c9 a8 b5 9a 90 08 ..A..e.:........ 00:22:29.561 00000240 04 51 73 2f 87 70 49 83 25 a7 21 68 8c 13 4d dd .Qs/.pI.%.!h..M. 00:22:29.561 00000250 44 33 21 e4 7b fa 89 ec 22 60 8b 6c 6f 6c 50 2f D3!.{..."`.lolP/ 00:22:29.561 00000260 22 9e 37 5d c7 a1 d4 c6 26 da 4a a5 86 23 e5 32 ".7]....&.J..#.2 00:22:29.561 00000270 cd d2 cd 19 88 64 2c 72 b6 73 67 41 ed 40 86 c9 .....d,r.sgA.@.. 00:22:29.561 00000280 89 49 63 f1 fb 6c 9d 5e a9 8b b3 c3 a1 bc 58 ff .Ic..l.^......X. 00:22:29.561 00000290 21 2e 30 dc 00 05 ac 1d ee 84 bc e5 fd 56 f0 3f !.0..........V.? 00:22:29.561 000002a0 6f 1a 81 9e b9 0e 17 e9 f1 76 ef cb 3b 12 4a 98 o........v..;.J. 00:22:29.561 000002b0 53 03 23 a6 e5 42 0b de 40 44 ab d0 d0 08 57 33 S.#..B..@D....W3 00:22:29.561 000002c0 73 5c de da 1d c0 0b 13 25 84 dd 4b 18 cd fc 2b s\......%..K...+ 00:22:29.561 000002d0 a7 32 c4 e3 4f 72 c1 95 67 a6 cf ae 10 c9 15 96 .2..Or..g....... 00:22:29.561 000002e0 ae 74 a3 10 7b ac e6 cf 56 03 c4 ae f8 87 da e8 .t..{...V....... 
00:22:29.561 000002f0 67 91 01 1c 19 7d ce 86 41 c2 20 5b 19 ab c9 6f g....}..A. [...o 00:22:29.561 00000300 29 ff ad 16 94 d2 25 31 8a 67 9b d8 fa 3e 91 79 ).....%1.g...>.y 00:22:29.561 00000310 66 cb 51 25 58 e9 ef 33 f1 fb b7 c4 32 09 38 f1 f.Q%X..3....2.8. 00:22:29.561 00000320 94 ef 3a b9 be ae cd 9c 40 e7 3c ca 74 af 28 15 ..:.....@.<.t.(. 00:22:29.561 00000330 df 77 5e 30 59 df 2f 74 78 ac b9 7e 4e b7 d9 63 .w^0Y./tx..~N..c 00:22:29.561 00000340 c7 d2 39 cf b1 e6 90 59 23 cb eb c2 5d 9f 66 55 ..9....Y#...].fU 00:22:29.561 00000350 e8 36 73 02 b4 50 b1 d3 f6 a4 c7 e5 d3 13 39 ec .6s..P........9. 00:22:29.561 00000360 9c 02 ef 63 9d b0 9f 8d 1d 17 2d f0 ac ea da 20 ...c......-.... 00:22:29.561 00000370 75 51 3d 2a 54 7a e2 31 f8 3c 14 e5 42 bc 8f 24 uQ=*Tz.1.<..B..$ 00:22:29.561 00000380 91 be 1c 92 5e ee c3 a8 4a bf c2 33 a4 4a 64 6a ....^...J..3.Jdj 00:22:29.561 00000390 e8 2b f0 0a dd 61 cf 1e 1e a9 c2 2a 5e 28 9f 11 .+...a.....*^(.. 00:22:29.561 000003a0 92 96 66 65 c7 b6 00 49 f2 b9 a8 2c 4e 75 be a2 ..fe...I...,Nu.. 00:22:29.561 000003b0 e2 41 28 ee d4 95 de 22 55 74 9e 73 f4 ba 5d e9 .A(...."Ut.s..]. 00:22:29.561 000003c0 fd f2 5c c4 22 12 68 a8 f9 e7 df da 1c c3 13 44 ..\.".h........D 00:22:29.561 000003d0 2b ce cc 01 f5 c9 5f 60 59 ff 54 a0 6d 73 c3 27 +....._`Y.T.ms.' 00:22:29.561 000003e0 d6 69 07 20 85 d1 06 4f 47 12 03 b7 5b 04 7f 81 .i. ...OG...[... 00:22:29.561 000003f0 69 53 3a a7 6e a7 15 29 e8 68 34 34 ac bd 4d 89 iS:.n..).h44..M. 00:22:29.561 host pubkey: 00:22:29.561 00000000 e0 1b a4 2e d7 12 92 f7 73 b3 2b c6 46 85 25 6e ........s.+.F.%n 00:22:29.561 00000010 10 66 91 93 39 88 c8 51 98 9c 25 77 7d dd 97 19 .f..9..Q..%w}... 00:22:29.561 00000020 29 88 eb 92 96 27 91 74 08 77 98 c2 51 6e bd c3 )....'.t.w..Qn.. 00:22:29.561 00000030 3b b8 85 f4 e6 b1 31 d8 04 d4 d6 ae 4f ee 82 96 ;.....1.....O... 00:22:29.561 00000040 7b c8 11 99 ee 2a c8 1f 10 3c e5 d3 e4 97 85 4a {....*...<.....J 00:22:29.561 00000050 f6 69 30 f7 78 8d 92 89 d0 e7 db 5c 79 48 f7 5d .i0.x......\yH.] 00:22:29.561 00000060 9c 1b d3 75 0d 05 87 02 e0 f7 13 df a5 15 48 d6 ...u..........H. 00:22:29.561 00000070 2e 43 5d c9 40 57 a2 5f 58 60 2e 3d cb 93 2b d7 .C].@W._X`.=..+. 00:22:29.561 00000080 b8 78 83 a8 b7 0b 19 a0 3e 82 f4 23 07 e5 0b a1 .x......>..#.... 00:22:29.561 00000090 ea 8c b0 91 83 88 9f 00 51 a8 a5 5f bd e6 af b1 ........Q.._.... 00:22:29.561 000000a0 73 0b 22 db ca 09 21 a6 26 73 80 38 2b 07 80 e9 s."...!.&s.8+... 00:22:29.561 000000b0 68 32 bb 0b 60 f5 c9 9f cd dd d3 7d a8 5e 27 6f h2..`......}.^'o 00:22:29.561 000000c0 a2 85 a1 6c 1c 04 ec b1 42 ea cf 13 08 dd a0 ec ...l....B....... 00:22:29.561 000000d0 44 b4 27 30 57 f7 07 94 be bb 48 b7 b1 a7 94 51 D.'0W.....H....Q 00:22:29.561 000000e0 c9 3e 79 79 e2 10 26 29 66 bd ea 1a 97 38 4c fa .>yy..&)f....8L. 00:22:29.561 000000f0 15 79 1a 42 03 ca 13 cc e2 b9 95 ed 04 b7 3f 76 .y.B..........?v 00:22:29.561 00000100 8b 02 70 22 28 a4 5a 76 5e fd 50 1a 19 42 4c 9d ..p"(.Zv^.P..BL. 00:22:29.561 00000110 c3 dd 17 fd 2d ae 33 e4 d3 7a 11 4e 21 e3 5f f3 ....-.3..z.N!._. 00:22:29.561 00000120 23 f6 cf 1a 57 51 3a ab 2f a2 c8 f2 d4 af 84 b3 #...WQ:./....... 00:22:29.561 00000130 b1 db 26 da 88 34 9a 84 d2 27 66 ca 13 88 ed 2a ..&..4...'f....* 00:22:29.561 00000140 2b 3c 9d 67 9f 53 74 bc 61 b1 eb 19 c5 0a 21 53 +<.g.St.a.....!S 00:22:29.561 00000150 90 bb 31 86 55 3a 42 f2 b6 66 b9 0a 58 16 fe be ..1.U:B..f..X... 
00:22:29.561 00000160 a8 f0 94 c8 65 c4 eb 21 44 df 7d 6a 99 c0 cb 67 ....e..!D.}j...g 00:22:29.561 00000170 8a 2f f7 d5 a7 c4 ab ca 34 12 c4 cb 8c d7 ee 30 ./......4......0 00:22:29.561 00000180 2e 7d 7d 80 ce 22 4e 45 cc 7b 41 d3 dd 1c 1f 39 .}}.."NE.{A....9 00:22:29.561 00000190 f8 13 20 6e 59 e0 56 55 60 ed 9b 6e 82 af 03 37 .. nY.VU`..n...7 00:22:29.561 000001a0 99 e3 be 6b 40 f5 79 a2 21 78 9c ec 42 ea 5b d2 ...k@.y.!x..B.[. 00:22:29.561 000001b0 a1 f9 28 c8 d0 0a db c8 38 05 16 dc 20 99 0b e0 ..(.....8... ... 00:22:29.561 000001c0 79 62 f9 9f 6e f2 cf d1 9b d0 50 3e 08 95 f3 62 yb..n.....P>...b 00:22:29.561 000001d0 32 a5 ab b0 a4 84 9f 13 8d 57 63 9d 7a 46 7a 92 2........Wc.zFz. 00:22:29.561 000001e0 9a a2 b3 2c 2f ce 78 bc d7 db a2 a7 5e 53 f0 4b ...,/.x.....^S.K 00:22:29.561 000001f0 86 16 77 a5 14 06 88 24 0c 0c 24 aa 63 8a 9a d3 ..w....$..$.c... 00:22:29.561 00000200 05 db 8d 02 14 4d a1 63 42 b3 3a e1 0d 31 7a 22 .....M.cB.:..1z" 00:22:29.562 00000210 47 a6 93 e8 9d b6 a3 9b 6e 61 e6 9e fe 51 f3 03 G.......na...Q.. 00:22:29.562 00000220 73 93 bb c0 19 05 3a a3 df 80 0b 39 83 6a d6 4d s.....:....9.j.M 00:22:29.562 00000230 01 3d 89 c1 d3 e4 2e 51 91 a2 f5 10 e1 26 c6 cc .=.....Q.....&.. 00:22:29.562 00000240 53 ed df c8 64 2d 32 a9 e8 13 4f 4a 2e 7b 21 e3 S...d-2...OJ.{!. 00:22:29.562 00000250 65 f0 5a 68 58 3e 83 79 d9 9b 81 82 6e 6e 6f 24 e.ZhX>.y....nno$ 00:22:29.562 00000260 b5 9a 24 1f 6e 3a f7 74 04 de 63 33 0a 07 a5 8c ..$.n:.t..c3.... 00:22:29.562 00000270 bd 3d 2d 21 18 24 18 93 f5 fb 54 74 be 53 e6 3e .=-!.$....Tt.S.> 00:22:29.562 00000280 5a 5d cb 6d 82 41 be 50 bd 73 97 ad a1 59 f9 f1 Z].m.A.P.s...Y.. 00:22:29.562 00000290 49 55 cd f7 25 7a 75 47 94 5d fc f8 85 ca 86 d8 IU..%zuG.]...... 00:22:29.562 000002a0 f2 53 da b5 f1 df b1 ac 54 7e 1d 80 7f 81 6d ff .S......T~....m. 00:22:29.562 000002b0 2e e3 95 f9 43 de 8e bf 1e a5 b1 e1 d0 8d 96 72 ....C..........r 00:22:29.562 000002c0 df 79 3e e5 0e 04 3c 4e 4a 03 7e be 45 18 1c b1 .y>... 00:22:29.562 00000390 ac 2d 62 60 46 ef bf db 78 5e 68 6d b0 be 2b b6 .-b`F...x^hm..+. 00:22:29.562 000003a0 f4 6a 8c 15 16 98 1e c6 2f fc 50 ec 8d 72 cd b2 .j....../.P..r.. 00:22:29.562 000003b0 aa 1e 25 18 a1 3a d5 33 4d 48 71 00 65 7f f3 54 ..%..:.3MHq.e..T 00:22:29.562 000003c0 51 95 d2 0e 99 e2 c2 eb 4f 20 7d 02 d0 00 87 96 Q.......O }..... 00:22:29.562 000003d0 fc 7d da d6 37 73 20 93 4b b6 7f 11 ff 88 cb b5 .}..7s .K....... 00:22:29.562 000003e0 13 ff 6b 5a e9 6f 15 5d de 41 de 92 ef e8 cf 30 ..kZ.o.].A.....0 00:22:29.562 000003f0 a1 41 45 df 31 a7 cb f0 8f 82 ff 85 80 70 6e 81 .AE.1........pn. 00:22:29.562 dh secret: 00:22:29.562 00000000 9f 37 bd 65 2a 6b 29 11 4e 9e 9c 0e 25 65 90 52 .7.e*k).N...%e.R 00:22:29.562 00000010 9a 66 f3 a8 36 9c 76 b8 99 38 fc 89 9f 3d 2d 7a .f..6.v..8...=-z 00:22:29.562 00000020 e7 58 1d 75 58 26 23 d4 b4 95 1e c5 af ea 79 55 .X.uX&#.......yU 00:22:29.562 00000030 1b ee 8f 0e ea 67 05 4b 65 6f 64 d7 fb 6c 9e d1 .....g.Keod..l.. 00:22:29.562 00000040 4b 3d 30 e7 5b 22 fd 5f d3 e9 b5 66 9a f2 dc ea K=0.["._...f.... 00:22:29.562 00000050 56 03 98 d0 2b 47 4e ad e1 ce 60 f2 de f4 49 f7 V...+GN...`...I. 00:22:29.562 00000060 24 56 ae 03 3d f3 19 95 53 61 3c a4 85 10 0f ab $V..=...Sa<..... 00:22:29.562 00000070 c1 8f 86 b1 70 47 63 e3 4f a7 f0 52 de f4 9e e8 ....pGc.O..R.... 00:22:29.562 00000080 2d 1a a5 81 57 95 9c b0 28 ec ee 5b 64 e0 89 0d -...W...(..[d... 00:22:29.562 00000090 c4 70 22 09 20 da 3f 28 fc b6 2e 4b 21 24 29 eb .p". .?(...K!$). 
00:22:29.562 000000a0 88 2d b1 a0 b6 71 37 cc bf 9f 6c c5 d7 df 6e 1e .-...q7...l...n. 00:22:29.562 000000b0 61 05 28 d3 57 1a 3d f4 5d a6 7c 20 7b 85 97 1b a.(.W.=.].| {... 00:22:29.562 000000c0 49 01 cb 6b 5b 43 e5 78 c3 2d ab 01 86 e9 98 c4 I..k[C.x.-...... 00:22:29.562 000000d0 da 0f ba 14 c9 1d 11 3d 2c 83 92 b9 17 69 9b b7 .......=,....i.. 00:22:29.562 000000e0 1c 4e f9 42 24 af c8 38 ae 27 0b 40 bd ad 7b 3d .N.B$..8.'.@..{= 00:22:29.562 000000f0 3d 7b 25 c0 49 e0 a9 5b 34 2d 96 63 c0 96 57 bc ={%.I..[4-.c..W. 00:22:29.562 00000100 14 88 69 64 e0 36 07 9c be dc a7 c7 af a6 c6 11 ..id.6.......... 00:22:29.562 00000110 2c bc 21 b7 cb 82 88 73 3a 89 3e 74 25 ed 71 61 ,.!....s:.>t%.qa 00:22:29.562 00000120 ca 93 22 5d be 06 36 bf 17 73 12 9a a4 a2 25 f4 .."]..6..s....%. 00:22:29.562 00000130 10 53 3d fe 02 b1 f1 94 27 05 07 98 b6 3c 81 85 .S=.....'....<.. 00:22:29.562 00000140 88 a4 a0 e6 7e 16 a4 a9 da c8 d3 88 86 49 e0 c8 ....~........I.. 00:22:29.562 00000150 42 2d 81 18 63 13 63 6b 3f 70 7d 71 d3 16 97 c4 B-..c.ck?p}q.... 00:22:29.562 00000160 ef 64 52 b1 29 54 b6 b0 15 cd 6d 1f 23 8a 3c 2b .dR.)T....m.#.<+ 00:22:29.562 00000170 09 cb bb 7f 0a 1d c3 82 64 ed 52 f0 64 c4 9b 98 ........d.R.d... 00:22:29.562 00000180 15 5c 2d dd 1b c6 36 eb b0 ac b4 58 9d e3 95 eb .\-...6....X.... 00:22:29.562 00000190 65 01 70 27 60 4e dd c0 07 d0 e4 b1 63 05 5e 79 e.p'`N......c.^y 00:22:29.562 000001a0 d6 6a 72 52 a4 e1 7a ca dc 51 b4 de 43 18 27 22 .jrR..z..Q..C.'" 00:22:29.562 000001b0 b5 f6 bb 31 16 c1 a6 d0 81 ca 72 5e 64 dd 8f 6e ...1......r^d..n 00:22:29.562 000001c0 26 10 dd d6 30 c3 52 bb 2e 65 91 3f 54 c7 46 5e &...0.R..e.?T.F^ 00:22:29.562 000001d0 d8 d7 33 e0 ca 5d 14 ab 78 82 96 3f e7 90 97 d1 ..3..]..x..?.... 00:22:29.562 000001e0 12 82 fe b5 c4 62 24 89 69 9f e2 f4 eb b9 24 3a .....b$.i.....$: 00:22:29.562 000001f0 2d e1 0b 44 a8 45 4f 3e ed 08 94 2f bd 91 d5 de -..D.EO>.../.... 00:22:29.562 00000200 52 be c1 f5 c3 33 7b a5 e6 8a 6e 11 30 04 0b 2e R....3{...n.0... 00:22:29.562 00000210 f5 c7 6d 05 1f a6 1e 59 c0 2c 5d 3c 2c 0a 03 57 ..m....Y.,]<,..W 00:22:29.562 00000220 c5 46 a8 1c 6a 2c 6e e2 21 67 80 60 71 8d 71 27 .F..j,n.!g.`q.q' 00:22:29.562 00000230 34 2a fd 0a 5f c9 6b f8 1d 0e 93 f4 75 de c6 1a 4*.._.k.....u... 00:22:29.562 00000240 cc c0 18 53 fd 78 89 01 43 b8 08 de 68 d9 f2 54 ...S.x..C...h..T 00:22:29.562 00000250 7e d1 00 fd da d1 22 e5 86 91 2a ed 68 7c 4f 69 ~....."...*.h|Oi 00:22:29.562 00000260 90 6b b3 5d 0f 84 4f 64 97 6a 45 8a 37 e1 0b 26 .k.]..Od.jE.7..& 00:22:29.562 00000270 59 45 c5 a7 5d 94 f1 ce 84 3e bc b4 9b 76 50 68 YE..]....>...vPh 00:22:29.562 00000280 4d 10 ae f9 25 02 a7 df 34 b4 20 ba 76 52 ec a8 M...%...4. .vR.. 00:22:29.562 00000290 b9 23 e3 13 3e b5 f0 65 7b 6c ae cc 49 36 69 8a .#..>..e{l..I6i. 00:22:29.562 000002a0 71 2e 66 99 86 e3 d9 47 6b 50 aa 65 1e d3 d8 31 q.f....GkP.e...1 00:22:29.562 000002b0 53 66 15 84 a2 73 4f c2 03 83 fb 3f d1 26 f4 73 Sf...sO....?.&.s 00:22:29.562 000002c0 72 9f d7 50 a4 58 0e c9 19 fd 44 33 e1 8e 15 6d r..P.X....D3...m 00:22:29.562 000002d0 5c 9e c6 8a fa 4b 93 8f bf bf f6 77 ce 48 b7 ce \....K.....w.H.. 00:22:29.562 000002e0 d4 b5 b7 16 cb bb f0 d5 9f 87 7f da d1 c1 91 e5 ................ 00:22:29.562 000002f0 50 62 98 f7 f0 2a 91 b2 dc d5 3c 29 6e 13 e5 4b Pb...*....<)n..K 00:22:29.562 00000300 cd 34 2a ae d3 a2 fa 5b e1 50 4b 04 53 12 71 25 .4*....[.PK.S.q% 00:22:29.562 00000310 31 88 b9 7a 1b 52 de db 1e 7b 29 7f 16 6e c6 e2 1..z.R...{)..n.. 
00:22:29.562 00000320 1d 17 8a c0 c5 4f ec 6a cd 4d df 79 1d 7b a3 f3 .....O.j.M.y.{.. 00:22:29.562 00000330 00 09 74 87 bc da cc e0 3f 71 b0 b7 80 3b 08 d2 ..t.....?q...;.. 00:22:29.562 00000340 67 2d c7 ae 3f 7e 6d 6a 0d 65 06 08 23 7c 9c 3b g-..?~mj.e..#|.; 00:22:29.562 00000350 96 5b 14 0d a3 2c 5a 3c 3f 01 ff 03 72 29 11 a1 .[...,Z. 00:22:29.562 [2024-09-27 13:27:27.991298] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key1, hash=3, dhgroup=5, seq=3775755315, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.562 [2024-09-27 13:27:27.991773] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.562 [2024-09-27 13:27:28.077956] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.562 [2024-09-27 13:27:28.078391] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.562 [2024-09-27 13:27:28.078649] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.562 [2024-09-27 13:27:28.078826] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.562 [2024-09-27 13:27:28.130816] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.562 [2024-09-27 13:27:28.131111] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.562 [2024-09-27 13:27:28.131213] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 5 (ffdhe8192) 00:22:29.562 [2024-09-27 13:27:28.131339] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.562 [2024-09-27 13:27:28.131583] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.562 ctrlr pubkey: 00:22:29.562 00000000 54 4d 33 a4 e3 9a 81 8d ba 53 e1 7f c0 06 b3 df TM3......S...... 00:22:29.562 00000010 74 b4 65 c7 bf 9c 14 ac 98 3a 2e 82 7c 52 a5 62 t.e......:..|R.b 00:22:29.562 00000020 a5 07 7a 60 26 50 34 5c 8d fa 5a f0 4a 57 17 54 ..z`&P4\..Z.JW.T 00:22:29.562 00000030 bc fd 01 12 12 6e b0 51 ec 72 d3 9d fe 42 8b 27 .....n.Q.r...B.' 00:22:29.562 00000040 bd 88 b0 c6 53 12 19 94 89 d1 16 96 90 59 ea 93 ....S........Y.. 00:22:29.562 00000050 09 1f d5 4c b5 2b 38 f6 54 50 6d 43 2c 22 76 81 ...L.+8.TPmC,"v. 00:22:29.562 00000060 af d6 e1 c8 16 a9 fd 5f c7 ce 93 72 80 3c df 96 ......._...r.<.. 00:22:29.562 00000070 4c 91 e5 16 69 05 22 66 81 4f 5d d3 f6 c6 e7 98 L...i."f.O]..... 00:22:29.562 00000080 85 58 02 3e e7 47 8f 0f 38 75 10 04 f0 83 be b3 .X.>.G..8u...... 00:22:29.562 00000090 43 88 38 ff 7b dc 9d 05 88 fb cc d4 e0 18 42 8c C.8.{.........B. 00:22:29.562 000000a0 6b d8 08 c5 0c 09 15 8d 30 f9 2c d4 97 fc d2 3f k.......0.,....? 
00:22:29.562 000000b0 57 29 4e 60 47 9a ac d5 52 fb 27 2a 70 98 e5 34 W)N`G...R.'*p..4 00:22:29.562 000000c0 36 43 ce da 64 0d 5e 21 53 c3 ef 54 9a 6f af 76 6C..d.^!S..T.o.v 00:22:29.562 000000d0 b9 d2 cc ed 2a 90 09 22 79 9e 41 b2 fa fd 23 5b ....*.."y.A...#[ 00:22:29.562 000000e0 b8 85 90 bd c1 37 5b 98 fd 59 b4 c9 4c 33 da b4 .....7[..Y..L3.. 00:22:29.562 000000f0 07 ff c6 f0 c8 88 20 50 63 4a 0f ec 5d 12 25 5c ...... PcJ..].%\ 00:22:29.562 00000100 33 d2 55 50 0e d4 c4 1e 4a ae 30 56 51 9d 26 c8 3.UP....J.0VQ.&. 00:22:29.562 00000110 71 34 f6 93 54 02 a0 05 0c 26 a8 08 8a d8 9b 62 q4..T....&.....b 00:22:29.562 00000120 30 4b 61 65 77 d3 18 9f 77 bf 91 27 4e 55 c9 7a 0Kaew...w..'NU.z 00:22:29.562 00000130 49 b0 18 7a 42 e6 6a 49 b2 31 11 62 f6 af 3c 54 I..zB.jI.1.b.. 00:22:29.562 000001a0 ba 24 3e de 3a 61 ec c7 74 d4 05 01 f3 3f 12 61 .$>.:a..t....?.a 00:22:29.562 000001b0 54 29 ae 81 94 5f 8b f3 a4 9e d0 df 6d ae 3e e5 T)..._......m.>. 00:22:29.562 000001c0 5c 68 11 a9 d3 2d b1 2b e4 23 ce e3 aa 27 2e 22 \h...-.+.#...'." 00:22:29.562 000001d0 e8 84 a6 26 78 30 f9 25 8a a1 7a fb 7c 74 5b 00 ...&x0.%..z.|t[. 00:22:29.562 000001e0 9b 54 b9 08 c2 44 cc 4b 36 0d 2f 5f 99 0c cd be .T...D.K6./_.... 00:22:29.562 000001f0 3a d9 99 97 3b 8b 07 9b 95 4c e6 c2 f1 2a bb 78 :...;....L...*.x 00:22:29.562 00000200 b4 25 08 95 58 09 ee 76 d9 17 7d 77 b7 83 e9 1a .%..X..v..}w.... 00:22:29.562 00000210 6e 5d a3 e4 c5 ab f5 bd a7 5c 17 16 8a 77 9d 75 n].......\...w.u 00:22:29.562 00000220 0f d1 26 a9 7b 20 d4 a0 12 55 84 21 2e 1f 14 06 ..&.{ ...U.!.... 00:22:29.562 00000230 94 f2 41 1b 89 65 19 3a 14 99 c9 a8 b5 9a 90 08 ..A..e.:........ 00:22:29.562 00000240 04 51 73 2f 87 70 49 83 25 a7 21 68 8c 13 4d dd .Qs/.pI.%.!h..M. 00:22:29.562 00000250 44 33 21 e4 7b fa 89 ec 22 60 8b 6c 6f 6c 50 2f D3!.{..."`.lolP/ 00:22:29.562 00000260 22 9e 37 5d c7 a1 d4 c6 26 da 4a a5 86 23 e5 32 ".7]....&.J..#.2 00:22:29.562 00000270 cd d2 cd 19 88 64 2c 72 b6 73 67 41 ed 40 86 c9 .....d,r.sgA.@.. 00:22:29.562 00000280 89 49 63 f1 fb 6c 9d 5e a9 8b b3 c3 a1 bc 58 ff .Ic..l.^......X. 00:22:29.562 00000290 21 2e 30 dc 00 05 ac 1d ee 84 bc e5 fd 56 f0 3f !.0..........V.? 00:22:29.562 000002a0 6f 1a 81 9e b9 0e 17 e9 f1 76 ef cb 3b 12 4a 98 o........v..;.J. 00:22:29.562 000002b0 53 03 23 a6 e5 42 0b de 40 44 ab d0 d0 08 57 33 S.#..B..@D....W3 00:22:29.562 000002c0 73 5c de da 1d c0 0b 13 25 84 dd 4b 18 cd fc 2b s\......%..K...+ 00:22:29.562 000002d0 a7 32 c4 e3 4f 72 c1 95 67 a6 cf ae 10 c9 15 96 .2..Or..g....... 00:22:29.562 000002e0 ae 74 a3 10 7b ac e6 cf 56 03 c4 ae f8 87 da e8 .t..{...V....... 00:22:29.562 000002f0 67 91 01 1c 19 7d ce 86 41 c2 20 5b 19 ab c9 6f g....}..A. [...o 00:22:29.562 00000300 29 ff ad 16 94 d2 25 31 8a 67 9b d8 fa 3e 91 79 ).....%1.g...>.y 00:22:29.562 00000310 66 cb 51 25 58 e9 ef 33 f1 fb b7 c4 32 09 38 f1 f.Q%X..3....2.8. 00:22:29.562 00000320 94 ef 3a b9 be ae cd 9c 40 e7 3c ca 74 af 28 15 ..:.....@.<.t.(. 00:22:29.562 00000330 df 77 5e 30 59 df 2f 74 78 ac b9 7e 4e b7 d9 63 .w^0Y./tx..~N..c 00:22:29.562 00000340 c7 d2 39 cf b1 e6 90 59 23 cb eb c2 5d 9f 66 55 ..9....Y#...].fU 00:22:29.562 00000350 e8 36 73 02 b4 50 b1 d3 f6 a4 c7 e5 d3 13 39 ec .6s..P........9. 00:22:29.562 00000360 9c 02 ef 63 9d b0 9f 8d 1d 17 2d f0 ac ea da 20 ...c......-.... 
00:22:29.562 00000370 75 51 3d 2a 54 7a e2 31 f8 3c 14 e5 42 bc 8f 24 uQ=*Tz.1.<..B..$ 00:22:29.562 00000380 91 be 1c 92 5e ee c3 a8 4a bf c2 33 a4 4a 64 6a ....^...J..3.Jdj 00:22:29.562 00000390 e8 2b f0 0a dd 61 cf 1e 1e a9 c2 2a 5e 28 9f 11 .+...a.....*^(.. 00:22:29.562 000003a0 92 96 66 65 c7 b6 00 49 f2 b9 a8 2c 4e 75 be a2 ..fe...I...,Nu.. 00:22:29.562 000003b0 e2 41 28 ee d4 95 de 22 55 74 9e 73 f4 ba 5d e9 .A(...."Ut.s..]. 00:22:29.562 000003c0 fd f2 5c c4 22 12 68 a8 f9 e7 df da 1c c3 13 44 ..\.".h........D 00:22:29.562 000003d0 2b ce cc 01 f5 c9 5f 60 59 ff 54 a0 6d 73 c3 27 +....._`Y.T.ms.' 00:22:29.562 000003e0 d6 69 07 20 85 d1 06 4f 47 12 03 b7 5b 04 7f 81 .i. ...OG...[... 00:22:29.562 000003f0 69 53 3a a7 6e a7 15 29 e8 68 34 34 ac bd 4d 89 iS:.n..).h44..M. 00:22:29.562 host pubkey: 00:22:29.562 00000000 6a 0c 7c 00 e7 c4 71 1f 2b 03 bf 55 26 cf 58 2e j.|...q.+..U&.X. 00:22:29.562 00000010 6a ab 0b df b1 9d fc 09 4b aa d7 9b a7 ac 4a 4e j.......K.....JN 00:22:29.562 00000020 80 4e bb 23 c6 9d d2 18 94 4d 69 78 b7 82 e7 75 .N.#.....Mix...u 00:22:29.562 00000030 d2 c6 e3 d4 9d 56 96 b0 6d 15 90 e3 89 0c 24 65 .....V..m.....$e 00:22:29.562 00000040 4b f4 5b b1 a8 4a d7 cd 40 15 a5 99 61 72 d3 a1 K.[..J..@...ar.. 00:22:29.562 00000050 5e 2e 5a f5 04 ea ff ed b7 29 b3 64 75 27 8d 88 ^.Z......).du'.. 00:22:29.562 00000060 9d 45 1f 5c 8f bb 7b 5f 36 47 35 2f 75 4d f5 97 .E.\..{_6G5/uM.. 00:22:29.562 00000070 ca f2 2a 8f a7 20 bf 1a 9a bc e8 47 69 bf b4 54 ..*.. .....Gi..T 00:22:29.562 00000080 fa 19 1d ac f2 7e b1 32 42 7c ba 48 c7 7b 30 b1 .....~.2B|.H.{0. 00:22:29.562 00000090 a1 3b f1 16 6d 20 cb 46 24 cf cc 02 0b 9a d0 37 .;..m .F$......7 00:22:29.562 000000a0 b7 3e 2b c5 ef a0 04 17 6a 37 a9 b2 5b a0 2d 82 .>+.....j7..[.-. 00:22:29.562 000000b0 92 d8 e7 31 41 97 3c 22 68 9b 0c eb 1e bd 57 2a ...1A.<"h.....W* 00:22:29.562 000000c0 73 6c ec 9b 03 6c ec 50 19 e5 5a aa 67 56 7c 81 sl...l.P..Z.gV|. 00:22:29.562 000000d0 94 f8 3c 33 10 94 80 39 d6 ca b7 fb d1 49 73 8c ..<3...9.....Is. 00:22:29.562 000000e0 de 60 34 55 21 94 60 57 7b 68 79 83 1a 96 2a 46 .`4U!.`W{hy...*F 00:22:29.562 000000f0 c4 3d 4f 2a 30 93 dd 6e bb e0 12 0d 90 30 08 0b .=O*0..n.....0.. 00:22:29.562 00000100 06 c6 10 69 02 23 f3 18 a4 dd 0c c6 1b 73 93 39 ...i.#.......s.9 00:22:29.562 00000110 14 38 4c 70 7e eb 68 95 2b 9d c0 72 81 5d df 5b .8Lp~.h.+..r.].[ 00:22:29.562 00000120 24 ae 9b 0d 16 89 e2 43 d1 e1 ff d8 19 2c 45 d1 $......C.....,E. 00:22:29.562 00000130 1a 9f d5 fd 2e b4 5c fa f1 42 e0 11 26 ca a3 26 ......\..B..&..& 00:22:29.563 00000140 87 9e 21 46 ef 9f 2f 85 5a 87 2c e0 95 7d 63 f2 ..!F../.Z.,..}c. 00:22:29.563 00000150 26 15 04 e1 80 35 10 9c 06 7b b6 81 53 00 af fb &....5...{..S... 00:22:29.563 00000160 9c 58 d3 80 b5 44 ed 63 11 e1 c7 a1 1b 06 36 25 .X...D.c......6% 00:22:29.563 00000170 21 0d ac c1 91 82 73 d0 55 06 50 d2 2c 88 76 4c !.....s.U.P.,.vL 00:22:29.563 00000180 4d 27 a6 b4 68 ce 6e 1f 22 b8 8b 2e bb f7 0e 94 M'..h.n."....... 00:22:29.563 00000190 cc 07 4c 45 c5 32 9d 7a 37 0d 85 b1 e8 1e a8 dd ..LE.2.z7....... 00:22:29.563 000001a0 d5 0a 64 6c 28 92 49 73 4f b9 03 91 09 6c 3a 92 ..dl(.IsO....l:. 00:22:29.563 000001b0 5b 88 f9 f1 be 47 45 34 28 5c 51 8b cc 73 e6 fa [....GE4(\Q..s.. 00:22:29.563 000001c0 b6 ad b8 e8 bb 85 03 85 4a 8c 53 b5 de 3d 3b f0 ........J.S..=;. 00:22:29.563 000001d0 ec c1 a0 d7 c0 7b 8d 3c e3 ff c5 e0 59 6e 88 8f .....{.<....Yn.. 00:22:29.563 000001e0 ad 8e 7e 48 9f c4 f8 21 c5 d3 de e3 a2 b6 61 17 ..~H...!......a. 
00:22:29.563 000001f0 f5 e8 ce 0c 12 49 2c b4 5f 90 20 57 f0 65 18 3d .....I,._. W.e.= 00:22:29.563 00000200 2b 24 46 08 83 09 a9 07 01 60 14 a7 b9 f7 37 aa +$F......`....7. 00:22:29.563 00000210 c9 ae 3c fa c4 05 ee 63 fb 7f a4 54 32 06 05 d8 ..<....c...T2... 00:22:29.563 00000220 71 72 d4 cb a4 27 c6 2c d8 27 96 ee ea 61 54 aa qr...'.,.'...aT. 00:22:29.563 00000230 ce e5 64 09 ed 7f 06 12 86 df 12 43 0c 5a d4 e0 ..d........C.Z.. 00:22:29.563 00000240 18 d5 f5 8b b5 8c 99 1f d9 81 b8 3b 52 f3 e0 df ...........;R... 00:22:29.563 00000250 e3 e0 84 ec 97 63 7f 1f 4a 40 61 77 8a 86 a3 00 .....c..J@aw.... 00:22:29.563 00000260 f4 37 84 68 11 6f ca dd f2 de 00 3d 9a 8f 7c de .7.h.o.....=..|. 00:22:29.563 00000270 05 06 06 a3 9f 14 f6 af 27 bb 37 ae 20 4f d9 4f ........'.7. O.O 00:22:29.563 00000280 bc 5a 18 91 09 f6 3b 83 70 d5 2e 40 85 ca e9 41 .Z....;.p..@...A 00:22:29.563 00000290 aa f8 95 78 56 13 4d e8 ee 3d d0 07 38 07 e4 c8 ...xV.M..=..8... 00:22:29.563 000002a0 eb f6 aa ee e8 7b 4d c2 02 9e 63 5d 3f 86 9f 5e .....{M...c]?..^ 00:22:29.563 000002b0 11 69 51 8b d0 1b ea 70 cb f4 4b a3 58 ca 50 f3 .iQ....p..K.X.P. 00:22:29.563 000002c0 e5 aa 90 ad fc 3b 59 01 d1 28 4b e3 e8 f4 80 9e .....;Y..(K..... 00:22:29.563 000002d0 52 5f 23 b8 e0 76 10 04 e5 75 eb 79 70 87 57 6b R_#..v...u.yp.Wk 00:22:29.563 000002e0 c8 53 6b c2 e8 5e cf 36 07 84 2b d7 51 88 56 07 .Sk..^.6..+.Q.V. 00:22:29.563 000002f0 ac 9f 32 65 6c 0b de 9d 29 9e ff d9 cd a0 48 8b ..2el...).....H. 00:22:29.563 00000300 a9 6d fa ea 3f 8f bf 83 89 c6 14 85 63 c2 83 4e .m..?.......c..N 00:22:29.563 00000310 b0 42 29 d4 6b ef de f1 1b 99 8f 95 78 ef 24 d1 .B).k.......x.$. 00:22:29.563 00000320 83 38 41 7d 3c 01 7f 15 6a cb c8 41 b3 34 0b 24 .8A}<...j..A.4.$ 00:22:29.563 00000330 11 87 50 54 76 6f ba fb a7 52 74 2f 9b 0b 00 e3 ..PTvo...Rt/.... 00:22:29.563 00000340 19 96 c4 98 4e 4a 61 d6 a4 8e a1 19 7b 11 46 70 ....NJa.....{.Fp 00:22:29.563 00000350 21 6b da 29 f7 43 70 29 e4 af 34 6d 8d 10 3b 1c !k.).Cp)..4m..;. 00:22:29.563 00000360 88 0e e1 d7 ad b5 df 51 aa 9f a1 79 d6 da 4c c0 .......Q...y..L. 00:22:29.563 00000370 0f ec 61 02 c7 b3 b3 b3 e6 18 73 a8 f9 a3 8e 28 ..a.......s....( 00:22:29.563 00000380 59 cb 12 2f c2 fe 9f 86 da 97 ba 00 61 86 ec ae Y../........a... 00:22:29.563 00000390 c4 de 09 99 9a 60 35 dc 70 1e 26 5b ed 1e 30 86 .....`5.p.&[..0. 00:22:29.563 000003a0 2e fd 45 43 ed ca 8e 1b 73 f9 8b ca 28 99 16 ec ..EC....s...(... 00:22:29.563 000003b0 27 42 db 98 72 2c f8 53 10 2f 9a 18 9e bc 61 fb 'B..r,.S./....a. 00:22:29.563 000003c0 2d 1c a9 d0 7a 68 74 a9 af 1a cb 97 64 58 95 db -...zht.....dX.. 00:22:29.563 000003d0 f6 54 b1 43 c9 b9 6f ad 40 c1 cc dd 8b 5e 49 f2 .T.C..o.@....^I. 00:22:29.563 000003e0 16 f2 b6 24 13 61 1d 67 73 8f 93 19 46 2b 43 a3 ...$.a.gs...F+C. 00:22:29.563 000003f0 ef ea 77 b3 83 43 0e d6 65 1e 12 20 3b 1d d8 a7 ..w..C..e.. ;... 00:22:29.563 dh secret: 00:22:29.563 00000000 c1 1e d3 94 f9 be 37 9b 2f de 89 ce a8 3c 68 ba ......7./....... 00:22:29.563 00000090 42 1c 9e 83 00 77 4e 02 70 7b 21 3b a8 6a 14 1d B....wN.p{!;.j.. 00:22:29.563 000000a0 6e 1f 5d 1e 8b c0 fd f9 1b f3 d7 69 4c 7a ba 3c n.]........iLz.< 00:22:29.563 000000b0 21 45 de d0 e3 e8 db c5 16 71 2b a7 ec 90 d5 56 !E.......q+....V 00:22:29.563 000000c0 2a e5 46 cc eb ee 77 b7 1b a9 db 95 0e 77 03 0f *.F...w......w.. 00:22:29.563 000000d0 c0 91 a3 df df 38 ad 5c 17 a6 8e 94 c2 65 0e fc .....8.\.....e.. 00:22:29.563 000000e0 cc c3 98 10 8e 09 70 ce 68 73 d6 3b dd a3 9a 7f ......p.hs.;.... 
00:22:29.563 000000f0 28 e5 17 b5 55 5c 3d 6d 65 bc 39 db 4b 2c e9 65 (...U\=me.9.K,.e 00:22:29.563 00000100 80 88 f2 f3 6b 62 14 31 fd 23 ad a4 6c 64 a0 e3 ....kb.1.#..ld.. 00:22:29.563 00000110 0c 13 7a 2e c3 06 ea 92 01 fc 8e e0 77 e4 10 0e ..z.........w... 00:22:29.563 00000120 13 f6 25 95 c6 60 fb 76 4b 16 1d cf 0b d5 33 11 ..%..`.vK.....3. 00:22:29.563 00000130 59 d7 67 32 46 86 98 f5 e8 84 69 b7 b4 3c 09 73 Y.g2F.....i..<.s 00:22:29.563 00000140 6a 53 cc fa e7 dd 2a 0a d3 0d 1e d6 3b 15 4a 4c jS....*.....;.JL 00:22:29.563 00000150 4d 2c fa ef 31 0c e2 39 7a 8b c9 91 61 93 96 7c M,..1..9z...a..| 00:22:29.563 00000160 5f e9 cc 9e 76 e0 e3 08 dc 96 b6 37 79 7e cf 23 _...v......7y~.# 00:22:29.563 00000170 40 a5 49 fd 77 f8 68 e2 cc bf 7e c0 4d bb d2 37 @.I.w.h...~.M..7 00:22:29.563 00000180 86 57 ec fa 7a 4b 7b bb 56 ab 2c c7 44 57 ef d0 .W..zK{.V.,.DW.. 00:22:29.563 00000190 42 af 08 10 0c 10 02 9a e0 6f 8f 5f 8a cc 7c 5a B........o._..|Z 00:22:29.563 000001a0 a8 f9 83 1d 53 e3 0b 83 19 cb 7b bc ab ee 36 99 ....S.....{...6. 00:22:29.563 000001b0 39 0b 32 3a 55 50 33 9c 3e 7f ca 22 28 91 3b c5 9.2:UP3.>.."(.;. 00:22:29.563 000001c0 d3 93 74 ca be 68 00 9b c9 63 cc 83 6d 7c 47 c5 ..t..h...c..m|G. 00:22:29.563 000001d0 9f 13 42 2f 49 72 b9 7e d9 b1 12 94 63 56 f6 62 ..B/Ir.~....cV.b 00:22:29.563 000001e0 1d b3 52 90 25 78 e8 72 78 16 07 71 1c c5 7d 65 ..R.%x.rx..q..}e 00:22:29.563 000001f0 ef e1 2e fa a6 20 80 2c 64 0d 99 cc 00 b9 44 6b ..... .,d.....Dk 00:22:29.563 00000200 c9 cb ee 7f e4 8b 27 25 aa 69 40 68 3f 81 57 dd ......'%.i@h?.W. 00:22:29.563 00000210 da ab b9 e6 aa 3b 2b 1d 06 e2 09 48 75 9f ea 77 .....;+....Hu..w 00:22:29.563 00000220 04 2c 7f 0f f6 28 a9 c4 2c 8d 7b 73 3f 02 a9 f7 .,...(..,.{s?... 00:22:29.563 00000230 d1 51 c7 86 58 d2 66 c8 7f 57 4e a0 a8 cd ae f4 .Q..X.f..WN..... 00:22:29.563 00000240 57 8e 86 6a f7 42 d3 06 20 6e 97 02 d1 e9 ac e9 W..j.B.. n...... 00:22:29.563 00000250 42 9b 1c be a5 b9 3f 0f bd b6 d5 f5 5e a1 45 b9 B.....?.....^.E. 00:22:29.563 00000260 d4 a8 9d d1 53 08 b7 02 dc 3f a7 9b 8a 48 ed 69 ....S....?...H.i 00:22:29.563 00000270 f8 b6 79 f1 52 3a 5d d0 5c 16 24 bf ed 62 61 54 ..y.R:].\.$..baT 00:22:29.563 00000280 a6 b2 76 7f 2a b2 08 c6 57 d5 ee ec 9f 00 41 63 ..v.*...W.....Ac 00:22:29.563 00000290 0f 4d be 47 a6 ab fb 85 5b e7 b0 72 2e 10 72 e5 .M.G....[..r..r. 00:22:29.563 000002a0 04 85 cb 29 13 34 44 5e 38 89 d3 3e 19 bb 69 d5 ...).4D^8..>..i. 00:22:29.563 000002b0 b4 fa a3 77 cc da 18 86 77 9a 2e da e1 e7 fd c0 ...w....w....... 00:22:29.563 000002c0 b1 fd 61 02 de bf b4 a2 1c 9e 05 a8 26 7e 10 f1 ..a.........&~.. 00:22:29.563 000002d0 8e 74 74 3a 76 90 d6 2e d5 45 87 1e 50 2a f5 6a .tt:v....E..P*.j 00:22:29.563 000002e0 af 1d 84 41 bb ab 64 f6 33 a3 3e 5e 94 78 1c f8 ...A..d.3.>^.x.. 00:22:29.563 000002f0 fe 28 db 41 ba 0e 70 a9 5a e1 66 03 e4 86 80 5a .(.A..p.Z.f....Z 00:22:29.563 00000300 28 bc 5b aa 89 51 9a 4c 80 e6 26 e6 9d 66 e4 10 (.[..Q.L..&..f.. 00:22:29.563 00000310 4f ba e7 51 7b e0 05 66 7b 1e 66 5a 3d 3f ff 70 O..Q{..f{.fZ=?.p 00:22:29.563 00000320 de 97 e7 a6 a9 96 73 cd 29 89 5c f3 ec 3c e8 65 ......s.).\..<.e 00:22:29.563 00000330 76 17 aa e5 27 b8 77 d3 ca 90 3d 6f d4 ac 50 7f v...'.w...=o..P. 00:22:29.563 00000340 b2 d5 b0 6c 7e 4d 7e b2 7a 45 09 8c 0c d0 2e 7f ...l~M~.zE...... 00:22:29.563 00000350 cc 33 40 6f 1b 82 c0 81 69 d9 6e 89 8c 57 ae bc .3@o....i.n..W.. 00:22:29.563 00000360 8b fc 16 e8 fc bd e5 a3 7f 7c ac fa c8 58 15 ae .........|...X.. 
00:22:29.563 00000370 93 09 4a e3 6d 53 90 39 e9 c3 f8 d3 b1 af 73 a4 ..J.mS.9......s. 00:22:29.563 00000380 e1 9e 36 8d ad 23 77 4e 29 a8 39 d2 b6 da f7 fb ..6..#wN).9..... 00:22:29.563 00000390 a9 65 19 65 ec 99 f5 14 dd b1 3b 45 70 51 e5 96 .e.e......;EpQ.. 00:22:29.563 000003a0 e2 18 8c c3 bf 81 fa c5 75 bf 29 d9 98 ad 50 6b ........u.)...Pk 00:22:29.563 000003b0 ff e9 70 e5 08 2d f8 34 b7 a1 f6 aa 02 d3 af 23 ..p..-.4.......# 00:22:29.563 000003c0 a7 a4 c7 71 c5 5f a6 80 fe 7b 60 6a 43 65 36 48 ...q._...{`jCe6H 00:22:29.563 000003d0 52 1f d0 22 37 24 8c 6f 31 35 0d a7 2e f9 0e 35 R.."7$.o15.....5 00:22:29.563 000003e0 f9 e9 49 9b 17 6d 51 4f 83 0a 5b c0 67 c8 18 96 ..I..mQO..[.g... 00:22:29.563 000003f0 89 ad 96 24 50 2c 30 27 84 df b4 0c ea c5 97 56 ...$P,0'.......V 00:22:29.563 [2024-09-27 13:27:28.300122] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key1, hash=3, dhgroup=5, seq=3775755316, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.563 [2024-09-27 13:27:28.300497] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.563 [2024-09-27 13:27:28.387240] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.563 [2024-09-27 13:27:28.387805] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.563 [2024-09-27 13:27:28.387931] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.563 [2024-09-27 13:27:28.388175] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.563 [2024-09-27 13:27:28.548583] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.563 [2024-09-27 13:27:28.548870] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.563 [2024-09-27 13:27:28.548982] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 5 (ffdhe8192) 00:22:29.563 [2024-09-27 13:27:28.549081] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.563 [2024-09-27 13:27:28.549324] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.563 ctrlr pubkey: 00:22:29.563 00000000 fc a9 25 87 ca ab 1d 2a 72 27 61 7d 4c 27 8e df ..%....*r'a}L'.. 00:22:29.563 00000010 9d 20 f5 7c 83 bb 1a 13 a2 ad a3 87 70 40 1a d9 . .|........p@.. 00:22:29.563 00000020 50 db 5a 10 9f dc 36 ae c0 68 6e 5a af 22 30 ea P.Z...6..hnZ."0. 00:22:29.563 00000030 12 5a 31 c3 6a af 4e 2e 3e 14 ce 5e 04 d7 d5 85 .Z1.j.N.>..^.... 00:22:29.563 00000040 9a d9 a5 20 23 8b 46 4f 20 1e 1d 50 bb 42 9d f0 ... #.FO ..P.B.. 00:22:29.563 00000050 bc d3 5f 11 ac e7 99 e0 e0 6c 09 33 7c 57 98 35 .._......l.3|W.5 00:22:29.563 00000060 a6 c3 9b a8 e9 0f d6 46 d4 8f 32 c8 ff 58 cb 59 .......F..2..X.Y 00:22:29.563 00000070 41 12 48 c5 d5 62 cc 52 70 66 c8 91 54 2f 8d 8e A.H..b.Rpf..T/.. 
00:22:29.563 00000080 c2 57 ef 63 41 00 07 c7 87 23 b3 bd bc 20 aa b9 .W.cA....#... .. 00:22:29.563 00000090 30 6a 29 46 5e 39 a2 5b 00 fd 25 c1 a8 d8 85 02 0j)F^9.[..%..... 00:22:29.563 000000a0 e8 8b 77 fb 00 b4 ab b6 93 22 ee d1 16 6b 3e b3 ..w......"...k>. 00:22:29.563 000000b0 56 fd 24 79 c7 1f b9 a7 2e a5 1a d0 78 b0 e1 c0 V.$y........x... 00:22:29.563 000000c0 17 b5 0c 28 fe 08 99 f5 74 55 2d 94 3a 0f 96 ab ...(....tU-.:... 00:22:29.563 000000d0 d0 c1 e8 97 4d 07 3b d7 af b2 c2 19 ea b2 35 1a ....M.;.......5. 00:22:29.563 000000e0 32 d8 d7 50 b1 1f d9 f8 84 a8 7a f9 ba ca 1e ec 2..P......z..... 00:22:29.563 000000f0 aa 3a 9f 30 45 59 79 91 a6 fc 0c 39 f5 bf 9b 58 .:.0EYy....9...X 00:22:29.563 00000100 27 61 c9 f7 70 f9 f8 f5 8b b4 f9 0d d6 34 ef 38 'a..p........4.8 00:22:29.563 00000110 da 69 08 1b b9 cc 8c 9b 29 6a 95 cc 53 25 e3 7c .i......)j..S%.| 00:22:29.563 00000120 2c ed f1 0a c8 68 2f 8b 87 c9 00 0b ad 8d 52 90 ,....h/.......R. 00:22:29.563 00000130 08 1a 3c 50 94 b1 65 27 19 d1 32 aa d5 b1 4e f6 ..*8aF.Y..c. 00:22:29.563 00000160 c5 a3 42 ef 79 40 9e 37 0f bc f2 dd 9b f3 6c c0 ..B.y@.7......l. 00:22:29.563 00000170 7d 0f ce 4e 33 b2 50 99 de d9 a5 f1 73 80 37 02 }..N3.P.....s.7. 00:22:29.563 00000180 b0 53 d8 b0 8e 65 89 ab 78 40 80 07 75 61 63 ad .S...e..x@..uac. 00:22:29.563 00000190 48 b0 f0 d4 52 a0 17 52 a8 17 7a d2 58 64 85 b2 H...R..R..z.Xd.. 00:22:29.563 000001a0 1f 28 2c c3 fb 19 54 b1 34 af ac 09 8b 82 45 7a .(,...T.4.....Ez 00:22:29.563 000001b0 69 8d 89 52 a2 72 b6 bc 7b 6f dc 84 22 ac d1 5b i..R.r..{o.."..[ 00:22:29.563 000001c0 ea 0a 89 85 54 31 bc 64 b2 dd 9a e9 3d 11 a2 aa ....T1.d....=... 00:22:29.563 000001d0 ba e4 c0 71 a2 6e f7 4e 11 07 2c 7d ee 0b c3 9c ...q.n.N..,}.... 00:22:29.563 000001e0 67 1e c7 7c db 61 c2 ae 69 2a 20 b7 0c 5c 3d cb g..|.a..i* ..\=. 00:22:29.563 000001f0 ed 0d 46 7a 05 84 85 e4 29 18 c3 9c 6e 10 a3 30 ..Fz....)...n..0 00:22:29.563 00000200 84 f3 12 d3 d1 43 55 da 69 2c 6e 2f 74 fa 29 f1 .....CU.i,n/t.). 00:22:29.563 00000210 2f 01 08 37 9b 78 76 c5 23 39 a7 a2 77 fe 1c 9f /..7.xv.#9..w... 00:22:29.563 00000220 37 a1 35 03 e4 57 3a 9e d6 03 41 26 ea 9b 02 00 7.5..W:...A&.... 00:22:29.564 00000230 8b 18 63 57 84 68 fb 02 fe bf a6 b3 5e d5 ad 3e ..cW.h......^..> 00:22:29.564 00000240 af 0b d6 03 09 64 93 00 c2 9a d8 58 c7 d0 56 09 .....d.....X..V. 00:22:29.564 00000250 73 ef 6c 1b e7 7d 29 c7 00 8b 3c 08 20 77 11 4b s.l..})...<. w.K 00:22:29.564 00000260 37 7b 4c 59 1e c9 ca 1a 19 8c 09 9c 48 65 24 9c 7{LY........He$. 00:22:29.564 00000270 b9 54 21 20 36 dc 86 eb 99 cb e8 4d c9 0d 51 23 .T! 6......M..Q# 00:22:29.564 00000280 12 60 e4 09 ce dc e9 7f 79 70 87 ac fe a2 ef 0c .`......yp...... 00:22:29.564 00000290 a8 5c f9 d4 cf 49 29 77 f0 74 a1 d8 bb ad b6 4f .\...I)w.t.....O 00:22:29.564 000002a0 bd 89 81 5b 5e f0 31 b4 48 2b 4b 58 db e8 7c b4 ...[^.1.H+KX..|. 00:22:29.564 000002b0 58 70 bd 89 e5 74 58 99 00 e9 f8 10 64 7b fe 61 Xp...tX.....d{.a 00:22:29.564 000002c0 ea 31 41 b2 29 cb 85 55 e8 56 09 68 1c 2e e3 2d .1A.)..U.V.h...- 00:22:29.564 000002d0 61 03 52 61 dd 38 f6 76 8d f7 6e 3c d1 cf a3 41 a.Ra.8.v..n<...A 00:22:29.564 000002e0 01 6c 1e 46 66 39 78 ff 0f 3c 65 5b df 62 96 84 .l.Ff9x........Am..U[.. 00:22:29.564 00000350 46 e5 63 52 62 40 de 84 9c e2 18 ec da 5c 3c 32 F.cRb@.......\<2 00:22:29.564 00000360 60 9d 3d 69 a1 2f 9c 84 12 22 ef 2e 31 d6 f2 0d `.=i./..."..1... 00:22:29.564 00000370 a0 06 8a 76 34 8e c4 de a2 69 54 cb 7b 5f ff 1e ...v4....iT.{_.. 
00:22:29.564 00000380 33 2d 05 3e 0f 56 2b a5 ad 24 fb 9c ff 6d dc a0 3-.>.V+..$...m.. 00:22:29.564 00000390 c1 0f d6 a7 14 e5 c4 ce b7 90 66 77 2f cd f7 84 ..........fw/... 00:22:29.564 000003a0 ef 52 ee b1 ef e9 44 85 de 71 84 0d 23 01 c4 3e .R....D..q..#..> 00:22:29.564 000003b0 e7 bc 1b 8d 7b 05 9d db 43 40 26 43 2e 26 02 af ....{...C@&C.&.. 00:22:29.564 000003c0 bc ab 45 c9 69 95 89 f7 0d a8 39 ea 59 ac c7 ea ..E.i.....9.Y... 00:22:29.564 000003d0 31 74 fe a7 7e 5c e3 2c 7c 25 42 86 07 69 b0 90 1t..~\.,|%B..i.. 00:22:29.564 000003e0 94 23 fc 70 94 16 af ec ca 62 cc 4f 93 ae 50 71 .#.p.....b.O..Pq 00:22:29.564 000003f0 d9 29 4b 64 6c 3a cb ba 4d 18 ea 08 98 e9 22 99 .)Kdl:..M.....". 00:22:29.564 host pubkey: 00:22:29.564 00000000 b3 f4 31 a8 a7 13 68 0a 99 74 90 20 44 d3 00 d6 ..1...h..t. D... 00:22:29.564 00000010 da 97 f8 ed cd 31 66 b4 e2 b4 29 f7 50 c5 47 2f .....1f...).P.G/ 00:22:29.564 00000020 39 df 07 90 6b 1f a3 33 3a 01 95 51 c9 3b 29 73 9...k..3:..Q.;)s 00:22:29.564 00000030 bd 06 42 83 5d dc c3 7c e5 75 96 ce ad c6 4d 32 ..B.]..|.u....M2 00:22:29.564 00000040 7f 80 11 1d f4 1e 75 cc 79 7e 33 14 50 b8 fb b9 ......u.y~3.P... 00:22:29.564 00000050 20 15 79 2a 6a bf 82 da e4 c1 15 9c 55 e5 3a 6a .y*j.......U.:j 00:22:29.564 00000060 42 9c a3 a8 ce b9 a2 a3 93 c1 b0 d7 ad 60 fb 73 B............`.s 00:22:29.564 00000070 ef 2f 11 ee d1 71 ec 24 ce 16 8f a7 39 57 33 15 ./...q.$....9W3. 00:22:29.564 00000080 66 30 21 98 a0 c0 da aa 0f ee 51 4d 0b 27 27 67 f0!.......QM.''g 00:22:29.564 00000090 ad e7 2d c4 e4 a7 3e 31 f1 17 28 63 16 d1 81 a2 ..-...>1..(c.... 00:22:29.564 000000a0 54 6a c9 a2 4d f2 f7 d6 5d 85 fa 5e ed 9f 9b cd Tj..M...]..^.... 00:22:29.564 000000b0 01 05 09 57 41 e9 b3 8b 5c e4 0f 9d ed e5 3d b1 ...WA...\.....=. 00:22:29.564 000000c0 51 19 29 3e 7f d5 89 89 58 64 f6 6b 0c 14 f4 3c Q.)>....Xd.k...< 00:22:29.564 000000d0 99 7b b5 ee 7f 7c e8 d1 11 7a 47 bd 1a 7b 60 5e .{...|...zG..{`^ 00:22:29.564 000000e0 4e ab 33 07 0c e9 e4 d9 90 1d ad e6 c1 5d ae e5 N.3..........].. 00:22:29.564 000000f0 04 8d a5 c2 87 ae b5 0f 7d ce f4 36 56 ec 6c 47 ........}..6V.lG 00:22:29.564 00000100 c2 2c 7d 7c 90 97 4f 5b 05 81 3d 3a 62 dc 7c 7f .,}|..O[..=:b.|. 00:22:29.564 00000110 ac 84 49 af 0b 53 12 1d ca ca bd 20 78 cf 36 54 ..I..S..... x.6T 00:22:29.564 00000120 ed 33 f8 1c 98 fb 84 24 d6 6f 15 e3 95 06 ff 5a .3.....$.o.....Z 00:22:29.564 00000130 95 9e 0d ea 1e 33 f8 71 b8 6a 88 a1 7f cd 12 ec .....3.q.j...... 00:22:29.564 00000140 ef 6d 87 12 bf 19 0e 67 b4 da 7c 9a 01 4b 60 ec .m.....g..|..K`. 00:22:29.564 00000150 04 d0 0e cd b5 4c 9e 69 28 f1 ac 2a ba 12 12 f4 .....L.i(..*.... 00:22:29.564 00000160 c1 16 c5 9e 6a 51 a4 a0 d0 1c 11 61 e7 df d4 a3 ....jQ.....a.... 00:22:29.564 00000170 ed 9f ce 90 e7 7d 39 f9 2b fd eb aa 46 a0 2b 90 .....}9.+...F.+. 00:22:29.564 00000180 05 b6 6d 55 1f 59 53 35 6b 12 91 04 3b f6 97 66 ..mU.YS5k...;..f 00:22:29.564 00000190 e6 eb 9f 71 4a 0b 8f fa b3 51 98 3e c1 45 fd 74 ...qJ....Q.>.E.t 00:22:29.564 000001a0 a1 68 8b 05 40 a8 3e 22 b8 da d4 da 29 d0 ea 10 .h..@.>"....)... 00:22:29.564 000001b0 c2 a4 bf 00 79 bd b8 29 85 0e 79 13 4e a4 52 08 ....y..)..y.N.R. 00:22:29.564 000001c0 09 1c db 21 d1 02 14 fd 8e 8a 8c 35 03 00 af c5 ...!.......5.... 00:22:29.564 000001d0 84 ec c5 8c 76 74 ba db fc 36 4e 14 32 3f bc f4 ....vt...6N.2?.. 00:22:29.564 000001e0 7b 4c e2 b6 f5 be 56 64 8c 0d 21 2f 80 a6 c8 b6 {L....Vd..!/.... 00:22:29.564 000001f0 d4 38 28 66 2e 83 65 80 93 65 22 64 d9 e3 a2 8a .8(f..e..e"d.... 
00:22:29.564 00000200 9f 7a 85 3e 38 6b b7 02 a4 c4 18 ac a9 9f 4e 64 .z.>8k........Nd 00:22:29.564 00000210 20 a4 11 60 0a 54 a8 bc 97 55 9f 91 ac 56 4a 09 ..`.T...U...VJ. 00:22:29.564 00000220 14 8a 28 eb b7 67 3a 15 64 ab a6 8d 14 cd 45 ef ..(..g:.d.....E. 00:22:29.564 00000230 7e c7 a9 01 8c 88 17 33 da 43 3b 0d e0 4a 81 f5 ~......3.C;..J.. 00:22:29.564 00000240 21 45 fa 52 3f 0a d9 30 1e d0 0f bb a0 5f be dc !E.R?..0....._.. 00:22:29.564 00000250 ba e2 b6 bf 4d 94 a7 d2 af 62 2d 1a 8d 16 f1 ed ....M....b-..... 00:22:29.564 00000260 40 c1 37 01 7a cf f0 94 36 d4 14 5f 7a 1c db 79 @.7.z...6.._z..y 00:22:29.564 00000270 25 27 6f 36 a6 85 d1 ae 24 df fc 0d 52 85 1a 69 %'o6....$...R..i 00:22:29.564 00000280 4c 94 ce fb 65 2a e5 ba 91 a9 69 fe 0c e5 fb 8e L...e*....i..... 00:22:29.564 00000290 7a 48 bd e0 0d 70 97 19 76 5e 82 0e f4 52 45 0a zH...p..v^...RE. 00:22:29.564 000002a0 c1 ed d4 33 9f 9d 88 df db 27 6c 09 86 f8 6b cb ...3.....'l...k. 00:22:29.564 000002b0 88 db 84 d2 94 41 07 98 82 53 0b ee 87 34 86 88 .....A...S...4.. 00:22:29.564 000002c0 ae c5 25 7b 05 f2 2c 15 78 00 c0 9f a9 5f 8a 39 ..%{..,.x...._.9 00:22:29.564 000002d0 b4 cd 62 d8 ea d0 2f be be 84 89 94 e8 8f c9 91 ..b.../......... 00:22:29.564 000002e0 aa 4d 69 7b cb 0a 64 93 17 03 b6 0e 31 66 94 ea .Mi{..d.....1f.. 00:22:29.564 000002f0 07 63 51 3d 54 d6 1f 4e 09 a5 a5 5f 42 a1 c9 40 .cQ=T..N..._B..@ 00:22:29.564 00000300 0d 24 bf bf 2c 03 f7 3b cb 10 af 61 e8 90 ab ec .$..,..;...a.... 00:22:29.564 00000310 b5 b3 25 2e 69 f2 d5 ad 5e 93 b9 b3 6c b4 22 99 ..%.i...^...l.". 00:22:29.564 00000320 ce 18 d8 6e 77 33 18 ae 93 35 ee 09 6d f9 0a 9f ...nw3...5..m... 00:22:29.564 00000330 88 69 55 d9 66 e1 e2 9e 0e 3d 0c ed b5 8c 19 6b .iU.f....=.....k 00:22:29.564 00000340 84 2a 81 a5 27 73 09 5e c0 46 4b f9 1f 04 ff a1 .*..'s.^.FK..... 00:22:29.564 00000350 49 6d 57 6d 78 07 e4 0a a9 69 05 f4 60 76 e6 0d ImWmx....i..`v.. 00:22:29.564 00000360 da c4 9f ea 3c 8c 6c 86 8d e2 86 24 3d d9 f3 e5 ....<.l....$=... 00:22:29.564 00000370 7a 5c d5 dc b7 a1 fa c0 d0 8c 44 e6 bb 1f 7e c6 z\........D...~. 00:22:29.564 00000380 aa e7 2e 28 f8 a1 85 f0 d3 c9 9d 87 31 85 32 55 ...(........1.2U 00:22:29.564 00000390 76 4a 5a 47 0a 10 89 5d fa 2b 61 35 95 52 25 8c vJZG...].+a5.R%. 00:22:29.564 000003a0 0a 29 58 08 db e4 c2 30 10 e0 93 ec c5 fd ba 03 .)X....0........ 00:22:29.564 000003b0 c1 d6 54 4a 33 94 fa 0e c1 b1 64 cb 6c 96 f5 37 ..TJ3.....d.l..7 00:22:29.564 000003c0 cf a7 e4 cf 19 97 2e cf 65 81 e1 9b 57 8c df e4 ........e...W... 00:22:29.564 000003d0 c5 38 0c b6 c4 5a 3d 1e 37 a8 87 64 82 0d 6c f0 .8...Z=.7..d..l. 00:22:29.564 000003e0 aa 56 cc 5d 1f 74 1f da 71 32 39 c8 8a 33 aa c5 .V.].t..q29..3.. 00:22:29.564 000003f0 a7 af f5 3c cf 3c 5c 37 d1 aa 37 9f 51 62 cb 9f ...<.<\7..7.Qb.. 00:22:29.564 dh secret: 00:22:29.564 00000000 6f a5 b4 f4 7f 6d 27 83 07 7d 22 56 99 ef e2 f0 o....m'..}"V.... 00:22:29.564 00000010 c8 e3 1a 4d 27 18 42 84 d7 7b 31 00 ca c1 64 fe ...M'.B..{1...d. 00:22:29.564 00000020 c9 40 3f 04 18 df 3f 9d f8 19 55 be 8d 99 2f e0 .@?...?...U.../. 00:22:29.564 00000030 98 b1 46 63 e0 8c c9 30 cb 7f 55 94 c7 9e 04 17 ..Fc...0..U..... 00:22:29.564 00000040 e5 43 ee 59 6f c3 ee 57 61 28 77 19 9d 2a 70 39 .C.Yo..Wa(w..*p9 00:22:29.564 00000050 f7 d2 a6 9d f1 a8 45 3e 48 02 98 26 28 5c 69 43 ......E>H..&(\iC 00:22:29.564 00000060 96 c8 10 6f ca be 90 bb fd 38 ec 8e ae 18 d9 98 ...o.....8...... 
00:22:29.564 00000070 cc 7d f7 5a 4d 13 a7 4b bb 94 7e d4 12 4a eb 73 .}.ZM..K..~..J.s 00:22:29.564 00000080 7c e0 b6 fd 5e 8a 9f d2 57 de f3 37 c8 6d c8 89 |...^...W..7.m.. 00:22:29.564 00000090 fd b3 74 b0 47 e4 bb 78 b1 53 03 e8 c2 af f2 db ..t.G..x.S...... 00:22:29.564 000000a0 e7 1a 6c 21 ad 29 f1 1b 29 c9 f3 f5 a5 e0 36 dd ..l!.)..).....6. 00:22:29.564 000000b0 ab b5 96 57 f2 10 d1 32 c5 6f 2b 46 27 75 0b 69 ...W...2.o+F'u.i 00:22:29.564 000000c0 d1 a7 85 b7 07 6c b3 ac 33 3b 0c 50 5e 36 1f 29 .....l..3;.P^6.) 00:22:29.564 000000d0 40 6d 89 83 e9 6c 12 ba a9 90 7e 5c 37 26 37 84 @m...l....~\7&7. 00:22:29.564 000000e0 f7 7f df ca 26 f1 21 68 b4 52 04 e6 bd 6c fd 68 ....&.!h.R...l.h 00:22:29.564 000000f0 16 d7 5d e9 c8 79 f4 74 f1 7a 33 e6 85 a4 76 2e ..]..y.t.z3...v. 00:22:29.564 00000100 5a 4a af 1a 30 0d 9d 19 22 2c 2b 3a 4b ec a5 28 ZJ..0...",+:K..( 00:22:29.564 00000110 94 f9 52 4a 27 c4 46 21 de f9 2d a2 25 43 23 24 ..RJ'.F!..-.%C#$ 00:22:29.564 00000120 16 11 7d ef 4c 37 95 b7 65 a7 bd fa 11 57 2d 3c ..}.L7..e....W-< 00:22:29.564 00000130 01 ff 13 03 09 00 87 23 30 a8 08 d6 90 33 aa d8 .......#0....3.. 00:22:29.564 00000140 04 af c8 81 13 61 94 b0 9d 41 8e 55 e8 bd 42 43 .....a...A.U..BC 00:22:29.564 00000150 e2 61 14 5f db cd 9c c6 e3 71 bd 26 22 8f c0 fc .a._.....q.&"... 00:22:29.564 00000160 7b a9 cb 04 15 cf e1 70 f2 3b cc 64 49 8b cc 36 {......p.;.dI..6 00:22:29.564 00000170 be 7b 3b af 6c 68 61 40 c0 6b 7d be ce ed c2 02 .{;.lha@.k}..... 00:22:29.564 00000180 53 1e 50 a6 18 18 27 9d 37 a6 98 ba a8 f6 49 92 S.P...'.7.....I. 00:22:29.564 00000190 ab bf c0 f1 b5 c0 e8 bb 9b 18 46 d8 cf 5e 51 a4 ..........F..^Q. 00:22:29.564 000001a0 0a 30 9d 50 8b b4 d0 93 10 4a c8 48 82 68 83 1b .0.P.....J.H.h.. 00:22:29.564 000001b0 65 1e 5b df 88 77 ab b7 ec 2a 91 b6 3d 60 0f 33 e.[..w...*..=`.3 00:22:29.564 000001c0 af d6 3c b1 b5 5e 65 eb 00 37 73 7a 1c c2 4b 94 ..<..^e..7sz..K. 00:22:29.564 000001d0 0d 02 7f 64 45 03 0a 07 40 ab 0c 57 d9 05 29 b9 ...dE...@..W..). 00:22:29.564 000001e0 ab a2 cc a6 c0 83 ea d6 ca a1 97 ec 34 a7 c4 48 ............4..H 00:22:29.564 000001f0 20 1e 5a 7b 53 1c ee e2 2b 49 2c e1 2a 88 91 4e .Z{S...+I,.*..N 00:22:29.564 00000200 a9 54 3c 03 55 ab 3e 66 33 78 b8 e9 d9 5b c0 15 .T<.U.>f3x...[.. 00:22:29.564 00000210 aa d5 6e 47 6f ef 32 4b 18 c6 da 46 01 92 be 42 ..nGo.2K...F...B 00:22:29.564 00000220 bd 44 ff af 1f 8a 77 5e a7 af 11 8e 20 c9 e3 de .D....w^.... ... 00:22:29.564 00000230 00 c6 32 05 41 a8 c9 10 1b 5e f2 b6 5c 97 fe 70 ..2.A....^..\..p 00:22:29.564 00000240 9c b7 36 92 e7 15 14 7a af 3d 11 70 39 88 de cd ..6....z.=.p9... 00:22:29.564 00000250 a1 1d 60 f0 0f 40 e7 f9 5a 40 57 e4 d1 d4 49 f5 ..`..@..Z@W...I. 00:22:29.564 00000260 4a 21 82 84 10 c5 6c 23 af cd b8 1a 01 33 a1 fd J!....l#.....3.. 00:22:29.564 00000270 6a cf 2a 49 fd a1 72 0b aa f8 e6 e4 9b c1 18 a3 j.*I..r......... 00:22:29.564 00000280 73 72 68 60 a9 19 94 ee c6 55 ca ab d1 50 b6 7e srh`.....U...P.~ 00:22:29.564 00000290 ff 54 ab 4b 20 bb 52 7e b9 30 da ed 97 e9 d5 1e .T.K .R~.0...... 00:22:29.564 000002a0 22 be 42 8c 8d 30 e6 fc 30 f2 bd 54 dd 59 6f fb ".B..0..0..T.Yo. 00:22:29.564 000002b0 2e 44 93 c0 31 7d 09 7f 0d 9a 7c 4f 9f e4 03 c9 .D..1}....|O.... 00:22:29.564 000002c0 35 b7 5f 40 b7 8f a6 52 e3 32 02 1e 2e 90 e5 0f 5._@...R.2...... 00:22:29.564 000002d0 5e 8a b5 71 96 38 25 65 cb a6 8c 9b bc 06 f0 08 ^..q.8%e........ 00:22:29.564 000002e0 a9 0c 99 0d 20 a1 c4 73 18 47 21 f6 d6 f9 56 0c .... ..s.G!...V. 
00:22:29.564 000002f0 52 4a 0b b4 02 9a 0b d8 12 a6 37 43 95 ec 1c 6f RJ........7C...o 00:22:29.564 00000300 5f 22 c8 71 1f 76 79 c3 62 e2 7c 99 df 93 dc 1c _".q.vy.b.|..... 00:22:29.564 00000310 aa 43 c9 af f8 bd 0c 7f 52 95 fd a3 76 6e 53 59 .C......R...vnSY 00:22:29.564 00000320 63 02 8c c7 d1 16 d4 c3 d0 a2 8e 0f 00 51 89 b2 c............Q.. 00:22:29.564 00000330 e5 ae dc 97 ea 36 e8 3e 57 96 ad 8d 75 7b 4b 37 .....6.>W...u{K7 00:22:29.564 00000340 ea 1d f9 d2 d6 79 e7 9f bc da 4d a8 21 33 59 0c .....y....M.!3Y. 00:22:29.564 00000350 ce 5f ae 44 fc 9d 19 42 29 57 cb 8d 5d 24 02 39 ._.D...B)W..]$.9 00:22:29.564 00000360 fc fc b7 07 a6 e6 22 11 3a 56 d5 9d e6 82 45 0e ......".:V....E. 00:22:29.564 00000370 f5 16 6f 42 84 0d 37 1c 36 92 b6 cc 98 85 38 20 ..oB..7.6.....8 00:22:29.564 00000380 26 de e1 4b 93 ab 09 f8 14 f2 d0 3e e8 e6 36 f5 &..K.......>..6. 00:22:29.564 00000390 da 6b 3f 73 27 f1 a1 2c d0 e2 9b e6 5e 40 03 45 .k?s'..,....^@.E 00:22:29.564 000003a0 49 b4 a6 5f cd a1 58 b7 e9 12 86 4e 17 1f c5 28 I.._..X....N...( 00:22:29.564 000003b0 05 53 16 90 09 17 18 57 06 d6 94 9a af 7f 9f e3 .S.....W........ 00:22:29.564 000003c0 a6 38 c9 da 99 c7 8b 39 51 c4 e1 97 6a de 37 f1 .8.....9Q...j.7. 00:22:29.564 000003d0 42 76 1f ce f0 8a a3 0e cf ed 34 69 96 fe 31 3d Bv........4i..1= 00:22:29.564 000003e0 4f 37 a0 6f bb 16 8b a5 10 ca bb b0 c4 c7 fc 04 O7.o............ 00:22:29.564 000003f0 22 ec 2f 9f f2 9c e2 1d c3 29 4e ef 4c a7 e9 b0 "./......)N.L... 00:22:29.564 [2024-09-27 13:27:28.721855] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key2, hash=3, dhgroup=5, seq=3775755317, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.564 [2024-09-27 13:27:28.722198] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.564 [2024-09-27 13:27:28.807931] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.564 [2024-09-27 13:27:28.808463] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.564 [2024-09-27 13:27:28.808613] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.564 [2024-09-27 13:27:28.808881] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.564 [2024-09-27 13:27:28.860588] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.564 [2024-09-27 13:27:28.860871] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.564 [2024-09-27 13:27:28.861084] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 5 (ffdhe8192) 00:22:29.564 [2024-09-27 13:27:28.861187] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.564 [2024-09-27 13:27:28.861412] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.564 ctrlr pubkey: 00:22:29.564 00000000 fc a9 25 87 
ca ab 1d 2a 72 27 61 7d 4c 27 8e df ..%....*r'a}L'.. 00:22:29.564 00000010 9d 20 f5 7c 83 bb 1a 13 a2 ad a3 87 70 40 1a d9 . .|........p@.. 00:22:29.564 00000020 50 db 5a 10 9f dc 36 ae c0 68 6e 5a af 22 30 ea P.Z...6..hnZ."0. 00:22:29.564 00000030 12 5a 31 c3 6a af 4e 2e 3e 14 ce 5e 04 d7 d5 85 .Z1.j.N.>..^.... 00:22:29.564 00000040 9a d9 a5 20 23 8b 46 4f 20 1e 1d 50 bb 42 9d f0 ... #.FO ..P.B.. 00:22:29.564 00000050 bc d3 5f 11 ac e7 99 e0 e0 6c 09 33 7c 57 98 35 .._......l.3|W.5 00:22:29.564 00000060 a6 c3 9b a8 e9 0f d6 46 d4 8f 32 c8 ff 58 cb 59 .......F..2..X.Y 00:22:29.564 00000070 41 12 48 c5 d5 62 cc 52 70 66 c8 91 54 2f 8d 8e A.H..b.Rpf..T/.. 00:22:29.564 00000080 c2 57 ef 63 41 00 07 c7 87 23 b3 bd bc 20 aa b9 .W.cA....#... .. 00:22:29.564 00000090 30 6a 29 46 5e 39 a2 5b 00 fd 25 c1 a8 d8 85 02 0j)F^9.[..%..... 00:22:29.564 000000a0 e8 8b 77 fb 00 b4 ab b6 93 22 ee d1 16 6b 3e b3 ..w......"...k>. 00:22:29.565 000000b0 56 fd 24 79 c7 1f b9 a7 2e a5 1a d0 78 b0 e1 c0 V.$y........x... 00:22:29.565 000000c0 17 b5 0c 28 fe 08 99 f5 74 55 2d 94 3a 0f 96 ab ...(....tU-.:... 00:22:29.565 000000d0 d0 c1 e8 97 4d 07 3b d7 af b2 c2 19 ea b2 35 1a ....M.;.......5. 00:22:29.565 000000e0 32 d8 d7 50 b1 1f d9 f8 84 a8 7a f9 ba ca 1e ec 2..P......z..... 00:22:29.565 000000f0 aa 3a 9f 30 45 59 79 91 a6 fc 0c 39 f5 bf 9b 58 .:.0EYy....9...X 00:22:29.565 00000100 27 61 c9 f7 70 f9 f8 f5 8b b4 f9 0d d6 34 ef 38 'a..p........4.8 00:22:29.565 00000110 da 69 08 1b b9 cc 8c 9b 29 6a 95 cc 53 25 e3 7c .i......)j..S%.| 00:22:29.565 00000120 2c ed f1 0a c8 68 2f 8b 87 c9 00 0b ad 8d 52 90 ,....h/.......R. 00:22:29.565 00000130 08 1a 3c 50 94 b1 65 27 19 d1 32 aa d5 b1 4e f6 ..*8aF.Y..c. 00:22:29.565 00000160 c5 a3 42 ef 79 40 9e 37 0f bc f2 dd 9b f3 6c c0 ..B.y@.7......l. 00:22:29.565 00000170 7d 0f ce 4e 33 b2 50 99 de d9 a5 f1 73 80 37 02 }..N3.P.....s.7. 00:22:29.565 00000180 b0 53 d8 b0 8e 65 89 ab 78 40 80 07 75 61 63 ad .S...e..x@..uac. 00:22:29.565 00000190 48 b0 f0 d4 52 a0 17 52 a8 17 7a d2 58 64 85 b2 H...R..R..z.Xd.. 00:22:29.565 000001a0 1f 28 2c c3 fb 19 54 b1 34 af ac 09 8b 82 45 7a .(,...T.4.....Ez 00:22:29.565 000001b0 69 8d 89 52 a2 72 b6 bc 7b 6f dc 84 22 ac d1 5b i..R.r..{o.."..[ 00:22:29.565 000001c0 ea 0a 89 85 54 31 bc 64 b2 dd 9a e9 3d 11 a2 aa ....T1.d....=... 00:22:29.565 000001d0 ba e4 c0 71 a2 6e f7 4e 11 07 2c 7d ee 0b c3 9c ...q.n.N..,}.... 00:22:29.565 000001e0 67 1e c7 7c db 61 c2 ae 69 2a 20 b7 0c 5c 3d cb g..|.a..i* ..\=. 00:22:29.565 000001f0 ed 0d 46 7a 05 84 85 e4 29 18 c3 9c 6e 10 a3 30 ..Fz....)...n..0 00:22:29.565 00000200 84 f3 12 d3 d1 43 55 da 69 2c 6e 2f 74 fa 29 f1 .....CU.i,n/t.). 00:22:29.565 00000210 2f 01 08 37 9b 78 76 c5 23 39 a7 a2 77 fe 1c 9f /..7.xv.#9..w... 00:22:29.565 00000220 37 a1 35 03 e4 57 3a 9e d6 03 41 26 ea 9b 02 00 7.5..W:...A&.... 00:22:29.565 00000230 8b 18 63 57 84 68 fb 02 fe bf a6 b3 5e d5 ad 3e ..cW.h......^..> 00:22:29.565 00000240 af 0b d6 03 09 64 93 00 c2 9a d8 58 c7 d0 56 09 .....d.....X..V. 00:22:29.565 00000250 73 ef 6c 1b e7 7d 29 c7 00 8b 3c 08 20 77 11 4b s.l..})...<. w.K 00:22:29.565 00000260 37 7b 4c 59 1e c9 ca 1a 19 8c 09 9c 48 65 24 9c 7{LY........He$. 00:22:29.565 00000270 b9 54 21 20 36 dc 86 eb 99 cb e8 4d c9 0d 51 23 .T! 6......M..Q# 00:22:29.565 00000280 12 60 e4 09 ce dc e9 7f 79 70 87 ac fe a2 ef 0c .`......yp...... 00:22:29.565 00000290 a8 5c f9 d4 cf 49 29 77 f0 74 a1 d8 bb ad b6 4f .\...I)w.t.....O 00:22:29.565 000002a0 bd 89 81 5b 5e f0 31 b4 48 2b 4b 58 db e8 7c b4 ...[^.1.H+KX..|. 
00:22:29.565 000002b0 58 70 bd 89 e5 74 58 99 00 e9 f8 10 64 7b fe 61 Xp...tX.....d{.a 00:22:29.565 000002c0 ea 31 41 b2 29 cb 85 55 e8 56 09 68 1c 2e e3 2d .1A.)..U.V.h...- 00:22:29.565 000002d0 61 03 52 61 dd 38 f6 76 8d f7 6e 3c d1 cf a3 41 a.Ra.8.v..n<...A 00:22:29.565 000002e0 01 6c 1e 46 66 39 78 ff 0f 3c 65 5b df 62 96 84 .l.Ff9x........Am..U[.. 00:22:29.565 00000350 46 e5 63 52 62 40 de 84 9c e2 18 ec da 5c 3c 32 F.cRb@.......\<2 00:22:29.565 00000360 60 9d 3d 69 a1 2f 9c 84 12 22 ef 2e 31 d6 f2 0d `.=i./..."..1... 00:22:29.565 00000370 a0 06 8a 76 34 8e c4 de a2 69 54 cb 7b 5f ff 1e ...v4....iT.{_.. 00:22:29.565 00000380 33 2d 05 3e 0f 56 2b a5 ad 24 fb 9c ff 6d dc a0 3-.>.V+..$...m.. 00:22:29.565 00000390 c1 0f d6 a7 14 e5 c4 ce b7 90 66 77 2f cd f7 84 ..........fw/... 00:22:29.565 000003a0 ef 52 ee b1 ef e9 44 85 de 71 84 0d 23 01 c4 3e .R....D..q..#..> 00:22:29.565 000003b0 e7 bc 1b 8d 7b 05 9d db 43 40 26 43 2e 26 02 af ....{...C@&C.&.. 00:22:29.565 000003c0 bc ab 45 c9 69 95 89 f7 0d a8 39 ea 59 ac c7 ea ..E.i.....9.Y... 00:22:29.565 000003d0 31 74 fe a7 7e 5c e3 2c 7c 25 42 86 07 69 b0 90 1t..~\.,|%B..i.. 00:22:29.565 000003e0 94 23 fc 70 94 16 af ec ca 62 cc 4f 93 ae 50 71 .#.p.....b.O..Pq 00:22:29.565 000003f0 d9 29 4b 64 6c 3a cb ba 4d 18 ea 08 98 e9 22 99 .)Kdl:..M.....". 00:22:29.565 host pubkey: 00:22:29.565 00000000 2c cc 20 33 a8 19 d1 37 e7 ad 20 04 d2 7c 3d bd ,. 3...7.. ..|=. 00:22:29.565 00000010 8c e6 40 fb 17 f2 83 83 42 5c 07 fc 90 40 ef d3 ..@.....B\...@.. 00:22:29.565 00000020 80 69 d9 a5 95 bb ff 13 5f 60 6a f4 b5 ee a9 e6 .i......_`j..... 00:22:29.565 00000030 ae 42 65 75 eb 64 7e d0 9b 31 45 24 7a 7b 22 8c .Beu.d~..1E$z{". 00:22:29.565 00000040 ee 04 fa ac f2 8f 1e 96 6d bd 6a f0 1c 49 c5 71 ........m.j..I.q 00:22:29.565 00000050 4e 7d 9f f0 5f 49 24 f5 39 ab b4 6d 59 21 0d 75 N}.._I$.9..mY!.u 00:22:29.565 00000060 6e a7 d7 85 75 b2 78 d9 75 de 71 b6 18 89 57 b6 n...u.x.u.q...W. 00:22:29.565 00000070 4e ce d5 76 3f 59 2d b4 08 25 d6 60 53 dd 5a b2 N..v?Y-..%.`S.Z. 00:22:29.565 00000080 e8 0f 97 c3 6f 58 2f 4f 8c 3b 89 28 6b 5b 7a 50 ....oX/O.;.(k[zP 00:22:29.565 00000090 43 5e 8f fb 73 fb 29 cc f0 52 8c 90 e1 67 6f a9 C^..s.)..R...go. 00:22:29.565 000000a0 0d 70 0c fe b3 25 a7 25 81 84 42 0a e3 54 f4 fb .p...%.%..B..T.. 00:22:29.565 000000b0 7a 23 2d 36 59 73 b1 c9 92 e9 62 1e 8a 46 63 28 z#-6Ys....b..Fc( 00:22:29.565 000000c0 70 d5 4e 20 d0 3c 88 e6 68 9b 62 ab f3 38 83 4f p.N .<..h.b..8.O 00:22:29.565 000000d0 8f 2f da 83 57 61 ad be 43 2e 03 cb 08 3e 32 92 ./..Wa..C....>2. 00:22:29.565 000000e0 a8 f4 0b c6 4e 3c f2 d1 0c 6e 93 3d 98 c0 06 87 ....N<...n.=.... 00:22:29.565 000000f0 79 d4 28 65 d2 31 d7 81 0a bb d1 cb f4 6e f7 8c y.(e.1.......n.. 00:22:29.565 00000100 f1 95 42 e9 ee 55 1b 7a 8a ad 37 ce bf 86 5b 53 ..B..U.z..7...[S 00:22:29.565 00000110 54 10 ee 78 2c 82 55 95 29 f4 06 5d 9e 13 8d 7b T..x,.U.)..]...{ 00:22:29.565 00000120 13 db 70 d1 ac 33 4d a9 5f 8a ba 34 35 2b d1 11 ..p..3M._..45+.. 00:22:29.565 00000130 40 0d 2c b9 88 87 28 5c 00 fd a0 67 02 8e f3 0c @.,...(\...g.... 00:22:29.565 00000140 f9 d7 a6 83 97 0b 29 c9 fc 78 14 3d 2b fd 27 96 ......)..x.=+.'. 00:22:29.565 00000150 77 00 14 5a 0a 2c ce 62 6d 0b 6e 9e 3f 9d a7 fb w..Z.,.bm.n.?... 00:22:29.565 00000160 98 ed f8 85 cc 02 c7 08 e6 e4 93 a4 1f 53 7a 9a .............Sz. 00:22:29.565 00000170 de ec f3 0d 46 e5 97 52 76 5f 93 98 47 97 e8 46 ....F..Rv_..G..F 00:22:29.565 00000180 aa a9 62 68 6d a1 bd 3f 7f 81 13 b3 6b 88 42 9d ..bhm..?....k.B. 
00:22:29.565 00000190 46 04 51 7a 48 78 ad cd 59 54 4f 44 d5 07 17 7c F.QzHx..YTOD...| 00:22:29.565 000001a0 91 65 f5 61 78 73 df d8 4b ed be 4b 8b 41 51 0a .e.axs..K..K.AQ. 00:22:29.565 000001b0 cb ae 8c bc e5 a4 78 35 33 74 f7 6f b8 2a ee 50 ......x53t.o.*.P 00:22:29.565 000001c0 d4 e9 f5 b1 e5 56 0b 82 91 db 6a 63 36 37 5e 4c .....V....jc67^L 00:22:29.565 000001d0 1e ca 72 e7 08 59 91 74 11 b0 b8 34 c5 35 53 39 ..r..Y.t...4.5S9 00:22:29.565 000001e0 19 1b 5b 2c 28 12 b9 dd cd e0 1b 96 1f df 2b f6 ..[,(.........+. 00:22:29.565 000001f0 ec 1c 1b 8f 59 2d b1 f5 6a a7 b4 4a bb 33 6c 30 ....Y-..j..J.3l0 00:22:29.565 00000200 06 9d a4 f6 e5 1c ec 70 c0 f3 ea ca 57 89 5d 04 .......p....W.]. 00:22:29.565 00000210 de 32 1c 0f f7 a1 26 4b 5d e1 15 fb f9 84 b8 a4 .2....&K]....... 00:22:29.565 00000220 9f 36 bb 32 ff 59 48 58 c5 e7 bd 60 3d 34 1d 60 .6.2.YHX...`=4.` 00:22:29.565 00000230 6a f6 a9 bb 22 f4 c4 33 16 fc 51 cb 74 62 f4 11 j..."..3..Q.tb.. 00:22:29.565 00000240 18 9a 3c 1b 9b 2b c8 50 ca 2c e6 e5 bb 69 61 9d ..<..+.P.,...ia. 00:22:29.565 00000250 75 5d f9 5a e4 14 f9 1e 2f ba af 15 dc 3c 99 9f u].Z..../....<.. 00:22:29.565 00000260 b4 b1 75 08 62 cc c5 c7 1b bf 19 5b de 9e a8 54 ..u.b......[...T 00:22:29.565 00000270 68 05 2a c9 2b 1d 88 e5 83 af 9d 8a 51 7b 63 9c h.*.+.......Q{c. 00:22:29.565 00000280 3f b6 1b 20 4c a3 0a aa cd 67 3c 50 00 57 c4 27 ?.. L....g.\.....C<. 00:22:29.565 00000110 c4 c4 3a 68 8c 8f 13 59 8b d0 03 f0 80 4b db 2e ..:h...Y.....K.. 00:22:29.565 00000120 63 2a a7 cf 9f 21 09 91 55 57 d8 9b 56 9d 11 4f c*...!..UW..V..O 00:22:29.565 00000130 2d 2e 79 ef 7b a5 11 ef 50 dd f2 b8 a6 03 45 5c -.y.{...P.....E\ 00:22:29.565 00000140 23 4e 24 0f 57 eb c5 a1 85 61 8d 67 14 49 59 57 #N$.W....a.g.IYW 00:22:29.565 00000150 b1 42 3c 1c 45 b0 b5 63 4e 7c a9 17 d5 a0 63 32 .B<.E..cN|....c2 00:22:29.565 00000160 0e 34 52 f0 5b ac 04 2d 65 42 6d 77 82 76 f5 68 .4R.[..-eBmw.v.h 00:22:29.565 00000170 5b 87 8d 7a c8 a0 2b 93 f7 4e f8 25 7d 15 8f 13 [..z..+..N.%}... 00:22:29.565 00000180 e5 2a 46 96 48 ec 7a 17 ce e9 11 e2 8d 89 a5 78 .*F.H.z........x 00:22:29.565 00000190 ba 5e e9 0e 6a 20 8a a3 dc 10 be ec 07 6a 14 5a .^..j .......j.Z 00:22:29.565 000001a0 22 c9 7d c4 00 4c 6f 63 27 37 da ba a7 0a c4 ff ".}..Loc'7...... 00:22:29.565 000001b0 1f ad 3f 9a 09 94 2c 5c c5 47 61 e8 f3 ba fa 95 ..?...,\.Ga..... 00:22:29.565 000001c0 75 84 75 98 7b 0e a6 80 c6 73 8d 2d 71 88 c5 4e u.u.{....s.-q..N 00:22:29.565 000001d0 4b 6a e4 78 f6 85 f8 1d e7 21 ab 5f 9e 3f b5 2b Kj.x.....!._.?.+ 00:22:29.565 000001e0 26 dc 6a 7b 1c 59 9a 9f 78 52 b2 e0 7a d0 d9 df &.j{.Y..xR..z... 00:22:29.565 000001f0 86 b8 21 67 86 ba 53 83 2a b8 79 20 dd de 11 03 ..!g..S.*.y .... 00:22:29.565 00000200 d8 24 f2 d7 ae 6d a4 22 d5 c4 fa b9 01 28 b6 2e .$...m.".....(.. 00:22:29.565 00000210 10 72 b6 1f 93 32 b6 cb 0e 21 8a 7f 6f 7c d0 26 .r...2...!..o|.& 00:22:29.565 00000220 1d 6f 64 66 2f f1 d3 7f 4b 47 db 9f 7c a6 9e 03 .odf/...KG..|... 00:22:29.565 00000230 38 a7 ed ed c6 3d c4 c0 aa c6 0c ce c9 cd 3d aa 8....=........=. 00:22:29.565 00000240 14 ce 36 ec 12 f6 05 08 3a ed 70 5a 89 70 30 9b ..6.....:.pZ.p0. 00:22:29.565 00000250 ff 00 c8 c5 0e 05 c5 63 72 81 5b cd b9 88 77 43 .......cr.[...wC 00:22:29.565 00000260 5a 09 97 69 7e b1 74 83 c1 96 ab b0 9e db ed 84 Z..i~.t......... 00:22:29.565 00000270 3b 42 cb 3e 4c c7 94 1b d4 02 78 ce 9e c5 da ae ;B.>L.....x..... 
00:22:29.565 00000280 f3 46 79 58 71 61 86 a8 66 7c 2c 6f 86 1e f9 59 .FyXqa..f|,o...Y 00:22:29.565 00000290 f5 ba 8a 0e de 59 47 ec 2e 5a a8 b8 78 1b e7 ba .....YG..Z..x... 00:22:29.565 000002a0 9f 80 36 50 08 52 ec df f8 96 fb f5 58 3d 82 4a ..6P.R......X=.J 00:22:29.565 000002b0 29 11 fd 53 e1 cd 0a 28 79 4f 72 16 65 16 03 ee )..S...(yOr.e... 00:22:29.565 000002c0 e9 37 ab 15 96 31 38 a9 91 ce 56 ac d3 68 86 bf .7...18...V..h.. 00:22:29.565 000002d0 78 6f e4 2b 09 03 08 25 64 95 ba da a6 7c 50 b0 xo.+...%d....|P. 00:22:29.565 000002e0 cc 00 4a c5 c1 98 40 b4 41 a1 b5 16 48 98 79 92 ..J...@.A...H.y. 00:22:29.565 000002f0 b7 c3 df 73 16 a6 7e 5f 59 e7 3b c8 83 12 2b dd ...s..~_Y.;...+. 00:22:29.565 00000300 9f 5e 08 a7 c3 01 16 76 ca f9 8a 00 ee 1d 0d 5c .^.....v.......\ 00:22:29.565 00000310 c0 d9 4b d3 d0 bb 46 06 3a 00 a3 d5 b3 62 4b cd ..K...F.:....bK. 00:22:29.565 00000320 73 25 4f 31 aa ab 93 b9 6a 98 28 8c 95 15 2b 64 s%O1....j.(...+d 00:22:29.565 00000330 9b 46 96 9f 1e b7 75 f2 12 11 09 30 6f 0c 75 a8 .F....u....0o.u. 00:22:29.565 00000340 06 a4 48 65 93 21 6c 8a 05 c9 99 0d f0 1f f3 a2 ..He.!l......... 00:22:29.565 00000350 eb 83 ae 35 61 0a a9 06 5b 62 5d c4 f0 73 9d 52 ...5a...[b]..s.R 00:22:29.565 00000360 63 53 d2 d4 51 93 80 74 ef cf f0 0d ea e8 ef b5 cS..Q..t........ 00:22:29.565 00000370 16 90 62 f1 54 6f 3c 22 d1 05 8b 01 3e 7f 80 ba ..b.To<"....>... 00:22:29.565 00000380 05 2d 20 65 69 6d e5 52 07 76 2e 5e 41 2e 0b 3d .- eim.R.v.^A..= 00:22:29.565 00000390 84 a8 ba 83 27 67 8d f2 4a b3 b7 92 d9 74 43 6d ....'g..J....tCm 00:22:29.565 000003a0 ea d1 04 16 c7 17 e4 72 52 07 74 6a 38 eb 0f 6d .......rR.tj8..m 00:22:29.565 000003b0 fb 57 eb 45 44 68 d9 19 bb d7 9b 32 7d 8c 81 34 .W.EDh.....2}..4 00:22:29.565 000003c0 4c 78 9d 10 3b e4 d5 72 d1 33 30 86 a7 b1 1f ea Lx..;..r.30..... 00:22:29.565 000003d0 be 85 f7 3e b4 37 4e 86 14 5e ba 3b 57 4d e4 5e ...>.7N..^.;WM.^ 00:22:29.565 000003e0 62 11 e3 8b df c1 b1 85 4a 3f b7 d5 82 a1 05 75 b.......J?.....u 00:22:29.565 000003f0 7b b6 85 42 7f 2f 4f 19 c8 69 79 0d 68 97 9b df {..B./O..iy.h... 
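[editor's note] The "ctrlr pubkey", "host pubkey", and "dh secret" dumps above are the three values of one finite-field Diffie-Hellman exchange performed per qpair during DH-HMAC-CHAP authentication: the log's "dhgroup: 5 (ffdhe8192)" selects the 8192-bit group (hence the 0x400-byte dumps) and "digest: 3 (sha512)" selects the hash. The sketch below is illustrative only, not SPDK source; the modulus is a short stand-in for ffdhe8192 and all variable names are invented for the example.

    # Illustrative sketch of the exchange logged above (not SPDK code).
    # Each side publishes g^x mod p (the "host pubkey" / "ctrlr pubkey" dumps)
    # and both derive the same value (the "dh secret" dump).
    import secrets

    # Stand-in modulus: the well-known 768-bit IKE "Oakley group 1" value,
    # used here only to keep the sketch short; the log's group is ffdhe8192.
    P = int(
        "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
        "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
        "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
        "E485B576625E7EC6F44C42E9A63A3620FFFFFFFFFFFFFFFF", 16)
    G = 2

    host_priv = secrets.randbelow(P - 2) + 1    # host's ephemeral exponent
    ctrlr_priv = secrets.randbelow(P - 2) + 1   # controller's ephemeral exponent

    host_pub = pow(G, host_priv, P)    # sent to the controller ("host pubkey")
    ctrlr_pub = pow(G, ctrlr_priv, P)  # sent to the host ("ctrlr pubkey")

    # Both sides compute the same shared value ("dh secret" in the log).
    assert pow(ctrlr_pub, host_priv, P) == pow(host_pub, ctrlr_priv, P)

Roughly speaking, that shared secret is then folded into the HMAC-SHA-512 challenge/response whose progress the surrounding "auth state" transitions (negotiate, await-challenge, await-reply, await-success1/2, done) track for each qpair.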
00:22:29.566 [2024-09-27 13:27:29.017918] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key2, hash=3, dhgroup=5, seq=3775755318, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.566 [2024-09-27 13:27:29.018225] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.566 [2024-09-27 13:27:29.101586] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.566 [2024-09-27 13:27:29.102004] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.566 [2024-09-27 13:27:29.102136] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.566 [2024-09-27 13:27:29.102439] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.566 [2024-09-27 13:27:29.255793] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.566 [2024-09-27 13:27:29.255953] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.566 [2024-09-27 13:27:29.256065] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 5 (ffdhe8192) 00:22:29.566 [2024-09-27 13:27:29.256160] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.566 [2024-09-27 13:27:29.256381] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.566 ctrlr pubkey: 00:22:29.566 00000000 12 da 26 d1 81 24 b3 89 24 a9 3a 7f cd 75 69 43 ..&..$..$.:..uiC 00:22:29.566 00000010 f0 0b 7d 57 d5 f5 e0 f3 10 5f ae 1c 2a e1 dc 2b ..}W....._..*..+ 00:22:29.566 00000020 3c 8a cf f8 62 43 ae e6 ea 31 28 a5 26 d8 86 62 <...bC...1(.&..b 00:22:29.566 00000030 07 0b 94 e3 c2 09 b1 50 82 89 ce ea 29 a1 a0 85 .......P....)... 00:22:29.566 00000040 da f2 45 93 58 ae ef 22 75 b6 46 85 a1 c3 04 c6 ..E.X.."u.F..... 00:22:29.566 00000050 e8 81 b2 bc 6b 9a 77 67 c7 67 fe 82 80 04 26 57 ....k.wg.g....&W 00:22:29.566 00000060 f4 2f c1 b9 fd 54 ff 1e a7 99 51 52 73 9b 5e f3 ./...T....QRs.^. 00:22:29.566 00000070 70 d0 8d be 94 f1 36 22 c4 fc fd 21 45 c2 45 a4 p.....6"...!E.E. 00:22:29.566 00000080 c2 d9 99 d5 1d 28 ed b1 b4 fd b2 e8 28 43 40 06 .....(......(C@. 00:22:29.566 00000090 04 cf 09 7e f0 6f aa cb 45 75 27 5a 59 a5 19 66 ...~.o..Eu'ZY..f 00:22:29.566 000000a0 d0 de 2c 0b f1 20 e2 1c 62 3a 59 4c b4 52 62 e9 ..,.. ..b:YL.Rb. 00:22:29.566 000000b0 17 cf 89 b3 f6 c1 c1 bc e1 e4 1a e6 44 5d df 3d ............D].= 00:22:29.566 000000c0 da ef 53 00 71 ac 00 9f 57 fe 9e 6c d1 e5 42 2f ..S.q...W..l..B/ 00:22:29.566 000000d0 75 63 33 ad 08 d0 46 f6 97 45 a2 b8 21 67 42 35 uc3...F..E..!gB5 00:22:29.566 000000e0 77 af 08 0a cf 75 64 7e 0f 96 94 ed 42 81 75 4d w....ud~....B.uM 00:22:29.566 000000f0 d5 87 a1 2a eb b4 50 50 e4 fc c4 55 e2 be 92 22 ...*..PP...U..." 00:22:29.566 00000100 b2 f9 28 c4 d2 86 fb 29 d1 9f ef 4b 6a 6b e2 87 ..(....)...Kjk.. 
00:22:29.566 00000110 e6 6c 85 93 2d d0 69 5e 2f eb e4 a4 9e df 39 0c .l..-.i^/.....9. 00:22:29.566 00000120 64 cc ae 38 89 f7 05 6b e7 02 e9 0f 1b 95 3e e6 d..8...k......>. 00:22:29.566 00000130 76 e8 6c 26 5e 1f 0d 43 88 c1 cf 6a eb 0f c8 58 v.l&^..C...j...X 00:22:29.566 00000140 64 cb 65 af 66 c8 7a 1f 6e 51 69 7d fc 69 ac bc d.e.f.z.nQi}.i.. 00:22:29.566 00000150 58 9d d5 78 d7 31 67 e8 97 b2 54 9c b3 65 c4 2b X..x.1g...T..e.+ 00:22:29.566 00000160 cd 4b 79 33 9f 14 b8 c7 38 7b 13 bb 12 32 ed 3a .Ky3....8{...2.: 00:22:29.566 00000170 f8 34 40 ba 29 18 41 d4 8f 4f 44 16 c8 5c 82 0e .4@.).A..OD..\.. 00:22:29.566 00000180 87 5e 65 74 55 ed f9 44 f3 dd fc 4a b3 3f 05 e6 .^etU..D...J.?.. 00:22:29.566 00000190 56 c4 8e 89 d7 23 d3 cf 5b 08 16 de 4a 04 ae 5b V....#..[...J..[ 00:22:29.566 000001a0 84 73 d8 fa 9a ef 56 49 f8 90 e3 cc dc 9a a5 45 .s....VI.......E 00:22:29.566 000001b0 e4 46 c5 ea 81 2b 2d 8a 3e 42 03 fc 86 14 a7 86 .F...+-.>B...... 00:22:29.566 000001c0 17 ed ad e0 e0 ec f4 3d c0 a0 15 02 2c 94 75 22 .......=....,.u" 00:22:29.566 000001d0 dc 9a 97 3a 08 72 6f 87 80 be 07 a0 78 c5 50 03 ...:.ro.....x.P. 00:22:29.566 000001e0 d1 eb c1 0a 37 96 1b fb 50 35 9e 5a 26 a5 92 11 ....7...P5.Z&... 00:22:29.566 000001f0 41 38 92 09 64 c0 c7 d3 18 24 f7 c8 4c 52 77 6b A8..d....$..LRwk 00:22:29.566 00000200 42 15 25 42 10 16 18 9a 85 a0 b2 75 3b ed 20 f0 B.%B.......u;. . 00:22:29.566 00000210 d9 52 08 e6 5a 05 27 17 ad df 6b ee c3 4b 89 64 .R..Z.'...k..K.d 00:22:29.566 00000220 0c db f4 f3 da f7 45 4a bd 66 81 35 03 b7 de e2 ......EJ.f.5.... 00:22:29.566 00000230 36 e4 b4 5d d8 d0 8d 40 c2 09 cb 1c fc e3 cd a0 6..]...@........ 00:22:29.566 00000240 b1 9e 4c bf 26 68 38 20 34 51 3f 42 b9 e6 9f d5 ..L.&h8 4Q?B.... 00:22:29.566 00000250 31 7f 34 ca 41 17 52 0e 41 bb 1e 4d be 69 75 c5 1.4.A.R.A..M.iu. 00:22:29.566 00000260 b5 4f f8 b3 92 59 d4 8f a5 70 f6 ae cc 89 e0 0a .O...Y...p...... 00:22:29.566 00000270 aa f9 1c 42 ce 21 4b 42 c4 b5 5b 45 77 7c 5f a2 ...B.!KB..[Ew|_. 00:22:29.566 00000280 ce d2 15 68 ac a0 11 95 cf 68 61 20 31 bc 6b 72 ...h.....ha 1.kr 00:22:29.566 00000290 4a 23 5f 80 fe 86 f8 f7 8d 03 30 6d 78 af f8 da J#_.......0mx... 00:22:29.566 000002a0 79 6a d9 af 7c bf ee 3d 50 02 7f af 6d e1 fb 11 yj..|..=P...m... 00:22:29.566 000002b0 a5 ed 40 d1 60 82 ad c1 34 8b f8 8f 4e d9 51 ad ..@.`...4...N.Q. 00:22:29.566 000002c0 cb 86 23 c3 e0 34 4b 5a b5 86 f0 7f 17 5f c0 71 ..#..4KZ....._.q 00:22:29.566 000002d0 b6 23 c6 e9 5e c0 61 66 d7 d0 89 59 4d c8 82 de .#..^.af...YM... 00:22:29.566 000002e0 9c 4d 1e d8 6f 54 31 0d 0a 21 e3 58 9d a7 a6 db .M..oT1..!.X.... 00:22:29.566 000002f0 3c 0f e3 43 b8 ce 17 2a 18 50 d3 60 57 b1 8a 84 <..C...*.P.`W... 00:22:29.566 00000300 ee 7a 8e c7 a4 be ba fe 36 a9 eb 1a 57 15 15 15 .z......6...W... 00:22:29.566 00000310 55 84 dc 74 d4 60 68 24 36 16 0c a1 b9 0d 47 ca U..t.`h$6.....G. 00:22:29.566 00000320 85 ec c7 0a a2 6c 4b da fd 4f 88 23 66 bb a6 b5 .....lK..O.#f... 00:22:29.566 00000330 5d 2f e1 ac b7 e4 ea a0 ff e4 a6 34 d0 d7 9f 05 ]/.........4.... 00:22:29.566 00000340 ad 05 61 78 c4 7f 27 d5 00 a1 0c 50 15 af de 6f ..ax..'....P...o 00:22:29.566 00000350 03 2d f0 b2 56 ea 74 9e f1 2b d7 54 40 61 63 fd .-..V.t..+.T@ac. 00:22:29.566 00000360 f0 e0 19 ff 6f 1b 40 3f ce 35 f6 27 9a 6a 84 2c ....o.@?.5.'.j., 00:22:29.566 00000370 a2 44 b1 20 5d dc 9e 47 0d 96 67 15 c2 94 11 52 .D. ]..G..g....R 00:22:29.566 00000380 56 05 69 ec 7d 07 c8 3b 54 7f 66 ef 7b b9 ce c4 V.i.}..;T.f.{... 
00:22:29.566 00000390 48 2b 20 df b4 04 09 37 7c a6 39 14 d9 f3 7a 75 H+ ....7|.9...zu 00:22:29.566 000003a0 1e b2 6f c8 ed c5 ab 69 d4 0f ae 35 06 85 3d 64 ..o....i...5..=d 00:22:29.566 000003b0 cd 31 19 60 f9 34 24 9e 54 f3 0b 7e a2 f2 85 da .1.`.4$.T..~.... 00:22:29.566 000003c0 7a 28 0e f2 ad 19 ff 17 46 d8 46 5f 97 36 11 45 z(......F.F_.6.E 00:22:29.566 000003d0 45 da 8b 36 01 e7 ed 7a 43 85 77 53 e6 9a 4d c2 E..6...zC.wS..M. 00:22:29.566 000003e0 91 af 6f 17 bf 42 10 c3 1b 9c 36 0a b4 74 5f fe ..o..B....6..t_. 00:22:29.566 000003f0 a4 bf 2f d3 7a e1 d5 09 80 45 fe 9e 2c 48 b5 d0 ../.z....E..,H.. 00:22:29.566 host pubkey: 00:22:29.566 00000000 2f 73 3c cc 91 7f 87 cb c9 f9 da dc 9f b2 f1 dd /s<............. 00:22:29.566 00000010 10 c3 4f c4 ab f2 20 3e 91 4f 61 e8 ca e2 ce d2 ..O... >.Oa..... 00:22:29.566 00000020 ca a0 11 09 43 3e e6 45 8f 98 10 54 4c 9d c1 77 ....C>.E...TL..w 00:22:29.566 00000030 41 d4 fb d3 67 d3 8a ff b4 88 85 80 f2 a7 33 e6 A...g.........3. 00:22:29.566 00000040 78 96 13 26 66 8d b9 c5 f5 69 16 7a 4c e9 c5 3c x..&f....i.zL..< 00:22:29.566 00000050 ff b4 d5 c5 e6 cb 2c 17 35 23 a9 9c ee 9b ae aa ......,.5#...... 00:22:29.566 00000060 1d 88 0b f7 97 4e e6 b1 83 54 1a 81 88 de db e3 .....N...T...... 00:22:29.566 00000070 af 34 58 61 f5 cc 7e 38 b0 93 08 95 57 9f 95 83 .4Xa..~8....W... 00:22:29.566 00000080 2b d4 ac a2 98 ba 47 c6 2e 87 06 eb c8 5b 8b 48 +.....G......[.H 00:22:29.566 00000090 23 98 f7 3b 74 25 c6 ae e3 9f ca b7 e9 1b 38 16 #..;t%........8. 00:22:29.566 000000a0 61 8b cd be 1c a8 c7 77 8c 2a 4f ee ca 83 35 59 a......w.*O...5Y 00:22:29.566 000000b0 32 4c 13 2e 57 0d ed 4d e9 5d 55 ba eb ba 40 1c 2L..W..M.]U...@. 00:22:29.566 000000c0 87 1c cb 9b 4f 70 69 5f 5d 39 54 f7 16 06 ec 01 ....Opi_]9T..... 00:22:29.566 000000d0 6f a3 99 cc a6 c9 01 7f 07 92 da 61 85 6e fe b1 o..........a.n.. 00:22:29.566 000000e0 67 7c 56 88 80 94 9d 1e 54 48 4d fa 50 e7 ac d2 g|V.....THM.P... 00:22:29.566 000000f0 c8 7b b6 eb 5e ac ce 30 57 9b 32 82 a9 52 98 60 .{..^..0W.2..R.` 00:22:29.566 00000100 40 e6 b3 84 a3 a7 f8 c4 f0 18 3c 47 06 fd 34 59 @.........8...*@ 00:22:29.566 00000160 a3 c5 25 26 a5 76 11 2b 15 7a a2 7d f8 4f 23 33 ..%&.v.+.z.}.O#3 00:22:29.566 00000170 b9 64 18 14 7a 32 ec 84 d8 6f b9 a7 ff bb 46 7c .d..z2...o....F| 00:22:29.566 00000180 d5 3d 95 45 f0 a4 9c a0 ae bd d6 8b b2 5d 20 3a .=.E.........] : 00:22:29.566 00000190 57 cf 48 24 04 1c 8a a1 2a ca 69 6d 13 19 f8 9e W.H$....*.im.... 00:22:29.566 000001a0 07 24 ba 66 3c 33 57 27 03 59 4f 3f 6b 0c 16 df .$.f<3W'.YO?k... 00:22:29.566 000001b0 c2 39 cf db 3c f0 6a 34 44 f1 37 62 cf 2b 24 9a .9..<.j4D.7b.+$. 00:22:29.566 000001c0 64 03 63 7e 41 d2 c0 e6 08 dd 4a f8 fb da e5 fd d.c~A.....J..... 00:22:29.566 000001d0 c1 31 79 c9 9f 56 dd bc e8 fc bc d3 1e 6a a0 ab .1y..V.......j.. 00:22:29.566 000001e0 f9 27 35 5c bd 43 d8 df ec ee 76 25 c4 d7 be 36 .'5\.C....v%...6 00:22:29.566 000001f0 e0 5a 6d bf 96 e0 fd ca 4c a0 86 dc dd 34 4e 4b .Zm.....L....4NK 00:22:29.566 00000200 70 6b a1 a3 6f d6 38 8c 4a c7 26 95 44 b0 3e ab pk..o.8.J.&.D.>. 00:22:29.566 00000210 93 20 ed 36 f6 19 43 40 e4 15 e6 6f a5 d4 d0 75 . .6..C@...o...u 00:22:29.566 00000220 c5 4e 29 0d b5 65 7a 5e 71 fa 01 1c 1b b0 e6 8b .N)..ez^q....... 00:22:29.566 00000230 84 fa 2a 51 1b 70 34 44 18 31 b9 e0 02 1a 64 1a ..*Q.p4D.1....d. 00:22:29.566 00000240 0f 9f 12 a1 7c e6 17 02 47 82 6a 0a 91 5b 49 c2 ....|...G.j..[I. 00:22:29.566 00000250 0d 9f e5 7b 52 28 85 50 00 9c 64 41 6b da 89 06 ...{R(.P..dAk... 
00:22:29.566 00000260 4e e4 7e a1 f7 83 0d b9 ea b4 dd a2 bb f0 aa 59 N.~............Y 00:22:29.566 00000270 83 73 e0 56 b8 9e 59 5f fe 94 b5 b1 a9 54 d8 c4 .s.V..Y_.....T.. 00:22:29.566 00000280 d0 85 01 6f 16 2c eb c2 cf e9 91 4e 91 a7 b4 f9 ...o.,.....N.... 00:22:29.566 00000290 67 a3 15 8b db ab de 91 0a 0e 1a 63 7a f7 96 3b g..........cz..; 00:22:29.566 000002a0 9d fa 48 b4 89 e8 bf 0b cc 4f fb 0a cc 03 e8 2b ..H......O.....+ 00:22:29.566 000002b0 58 2e bf ae 22 fb 93 29 03 d8 ea 07 fb 96 28 df X..."..)......(. 00:22:29.566 000002c0 28 34 4f f5 8e 77 61 af 83 2c e1 0d e6 b9 74 23 (4O..wa..,....t# 00:22:29.566 000002d0 a4 65 99 36 4c 0b bd 07 8c 5c 61 f6 16 ad 27 f6 .e.6L....\a...'. 00:22:29.566 000002e0 a8 23 0f 21 59 1a bd e7 a8 22 28 b6 a1 3e 94 c6 .#.!Y...."(..>.. 00:22:29.566 000002f0 e9 6d f1 8c 9f c3 ca ce 36 06 1f d0 2f 86 6b 7d .m......6.../.k} 00:22:29.566 00000300 d4 7a 2a 3f 46 9c 68 de 27 46 82 94 a2 1b e8 73 .z*?F.h.'F.....s 00:22:29.566 00000310 84 76 54 8c 81 bc d6 72 fd 13 e3 2f 55 49 38 ed .vT....r.../UI8. 00:22:29.566 00000320 0e 21 56 78 80 d4 02 6a 9b d1 74 c5 75 ad a3 6d .!Vx...j..t.u..m 00:22:29.566 00000330 d8 56 4b 97 12 77 83 16 ca 12 aa 9a 7d 37 ca 74 .VK..w......}7.t 00:22:29.566 00000340 a0 cc ce 36 fe 10 06 8f b2 eb c1 51 d9 cc b4 b6 ...6.......Q.... 00:22:29.566 00000350 89 6a 77 37 8d a7 90 85 24 36 1b e9 df b3 c6 a8 .jw7....$6...... 00:22:29.566 00000360 a3 64 b6 38 a3 c2 0a f8 0e 33 8a 84 ea 54 aa 65 .d.8.....3...T.e 00:22:29.566 00000370 fc 47 a7 7b 5a 78 de 91 16 91 20 87 0c a3 8d 70 .G.{Zx.... ....p 00:22:29.566 00000380 6d d8 33 14 14 56 c9 72 92 ab c0 0b 3d c5 6e c7 m.3..V.r....=.n. 00:22:29.566 00000390 a0 81 2c 14 22 f4 af 25 98 ae b2 28 1d 16 1a 80 ..,."..%...(.... 00:22:29.566 000003a0 a4 bf 05 2b 42 65 80 4b f4 7c 89 a5 00 03 36 de ...+Be.K.|....6. 00:22:29.566 000003b0 6a 2e 7d 5b f4 af 5f d0 2f 62 91 06 54 b3 ba bd j.}[.._./b..T... 00:22:29.566 000003c0 75 a2 b6 49 3f fa ae 1c 30 77 3a bd bc 34 6c e6 u..I?...0w:..4l. 00:22:29.566 000003d0 33 fd 17 11 ff 2e 14 ca 73 d7 e4 b2 6d 60 49 19 3.......s...m`I. 00:22:29.566 000003e0 1f 6b e5 95 fb 67 fd d8 e1 df eb 6c 01 91 d7 cc .k...g.....l.... 00:22:29.566 000003f0 ff 57 40 c9 8d 6b 85 46 78 6b 48 4b f0 79 7a 10 .W@..k.FxkHK.yz. 00:22:29.566 dh secret: 00:22:29.566 00000000 a2 0a bb 1b 84 45 15 1e c1 1c d8 79 cd 92 93 43 .....E.....y...C 00:22:29.566 00000010 de 41 e9 b9 ba 13 e3 41 2e c1 3c 1a 42 ac 45 f5 .A.....A..<.B.E. 00:22:29.566 00000020 56 69 c1 1f 44 d3 b8 d2 7c 42 39 c9 e2 c6 99 f6 Vi..D...|B9..... 00:22:29.566 00000030 18 1c e5 fc b1 f1 79 9a 89 48 e6 62 83 bd 12 58 ......y..H.b...X 00:22:29.566 00000040 f8 c0 f2 ff 2e a2 0c 3d f1 5b 0f 59 0f 5f 93 dc .......=.[.Y._.. 00:22:29.566 00000050 12 bb 38 40 06 fb a7 76 7e 0d d4 dc 35 dc c1 a1 ..8@...v~...5... 00:22:29.566 00000060 1f 74 9e 96 9f df 8b 78 c5 9d 24 d8 fa b5 da 63 .t.....x..$....c 00:22:29.566 00000070 9c 4d 28 0c fe 19 d3 80 95 0a 1d 86 73 51 13 ff .M(.........sQ.. 00:22:29.566 00000080 74 d9 2e a2 a4 d3 0c e4 bb 9e f5 92 23 12 19 52 t...........#..R 00:22:29.566 00000090 4f 79 07 7c 8c 50 56 5e 8b 99 c6 0e 8a e3 4c 53 Oy.|.PV^......LS 00:22:29.566 000000a0 60 59 cf 82 b6 1d 2b 4d af a2 24 80 70 fe 77 11 `Y....+M..$.p.w. 00:22:29.566 000000b0 ab b4 df ac c6 b3 3c b8 45 1e 2d 53 0d 5c b7 b1 ......<.E.-S.\.. 00:22:29.566 000000c0 e5 04 c5 9e 61 30 36 30 60 56 b9 d2 19 e2 2f 4a ....a060`V..../J 00:22:29.566 000000d0 15 1d 34 9a 22 7c 7c 5c d1 ee 3b 37 19 11 20 1f ..4."||\..;7.. . 
00:22:29.566 000000e0 1f e6 9e 22 b4 f6 30 84 d8 5e d3 f9 5e 93 5b 36 ..."..0..^..^.[6 00:22:29.566 000000f0 81 c7 6f 2f 32 35 e4 bf 0b f5 0f 13 00 e2 58 82 ..o/25........X. 00:22:29.566 00000100 f1 53 cf 3f ce 5a 26 ff b5 23 2a c4 95 25 18 4c .S.?.Z&..#*..%.L 00:22:29.566 00000110 6f 1c a7 8b ae 75 29 62 e9 d3 87 e0 a9 2d 69 77 o....u)b.....-iw 00:22:29.566 00000120 b8 42 e0 43 d6 bc 27 02 cc 6f 72 dc 6e 93 db 29 .B.C..'..or.n..) 00:22:29.566 00000130 4a fd 11 b4 6c 85 f9 a1 fa 30 1f 92 90 c4 e2 a1 J...l....0...... 00:22:29.566 00000140 c0 17 2d a4 f0 f2 b6 9e dd 58 e8 a2 2a 9b ce d4 ..-......X..*... 00:22:29.566 00000150 19 c3 f5 4e a2 dd 6b b8 c6 c5 ac db 98 cf 53 77 ...N..k.......Sw 00:22:29.566 00000160 87 f7 a8 24 8a b4 a8 48 42 15 a8 2c 38 8f 89 f1 ...$...HB..,8... 00:22:29.566 00000170 e2 e4 3c 15 ee 42 42 69 38 e1 7e 10 85 70 5b d2 ..<..BBi8.~..p[. 00:22:29.566 00000180 cf 4a 64 40 0c d6 f1 72 6f ff 6c f3 af 29 d6 c7 .Jd@...ro.l..).. 00:22:29.566 00000190 b1 9a 08 f4 cc 42 a0 8f e6 29 12 09 fd 90 8e 4e .....B...).....N 00:22:29.566 000001a0 85 ae 1f d0 ec cd f3 66 f1 01 c5 1c d4 90 49 1a .......f......I. 00:22:29.566 000001b0 eb 6f cc d5 b0 a7 73 b3 d8 7d 88 90 c8 10 6d 42 .o....s..}....mB 00:22:29.566 000001c0 3b 36 fd e5 bb 35 e6 7d 98 b6 0d d4 4e 21 95 1c ;6...5.}....N!.. 00:22:29.566 000001d0 66 be 6d 2e d7 95 06 a1 a9 2d 94 01 99 47 f6 69 f.m......-...G.i 00:22:29.566 000001e0 dc 75 dd cd b9 db c0 08 b7 e8 cb ea 52 25 5f 0c .u..........R%_. 00:22:29.566 000001f0 ba 1a 7f 25 8a d8 99 8a dc 40 84 48 ad 41 e2 30 ...%.....@.H.A.0 00:22:29.566 00000200 31 b1 28 d5 ab c3 f8 b2 66 79 1e d7 01 a3 18 a9 1.(.....fy...... 00:22:29.566 00000210 26 68 15 87 2a 1c dc 8f d0 69 b5 3e a1 87 b7 a0 &h..*....i.>.... 00:22:29.566 00000220 00 57 49 80 fd c3 fc 46 15 3f 2c f8 9e 29 16 cc .WI....F.?,..).. 00:22:29.566 00000230 9e 89 77 c9 44 0d 44 31 5e 2c f4 fd 62 29 3e 0d ..w.D.D1^,..b)>. 00:22:29.566 00000240 ad 98 ae e0 eb 21 18 03 5b ee 51 26 b1 fe cc 2f .....!..[.Q&.../ 00:22:29.566 00000250 57 b2 cd 6a 39 b4 1e 13 1d f7 82 91 1b f5 93 ae W..j9........... 00:22:29.566 00000260 de 30 3a 6c 77 ef e9 69 f0 f3 72 1f 0b a8 a0 71 .0:lw..i..r....q 00:22:29.566 00000270 31 6a 6c 9c fd 2d bb 42 54 07 93 2d 0a b5 46 44 1jl..-.BT..-..FD 00:22:29.566 00000280 09 fd 3d e9 f4 e8 61 5e 46 d6 e9 07 a8 0e e9 c0 ..=...a^F....... 00:22:29.566 00000290 92 91 1b 6a b4 ab 4a fe f5 ec b8 b9 02 a6 d6 f7 ...j..J......... 00:22:29.566 000002a0 c2 11 1a e1 a5 e7 a4 10 5a a0 63 45 86 ee 7c 11 ........Z.cE..|. 00:22:29.566 000002b0 3d 50 d2 e4 03 54 8e e5 b6 40 5b 72 5b b4 ff 13 =P...T...@[r[... 00:22:29.566 000002c0 f6 7b a8 cb 6c 99 dd 9c 26 6d 82 58 08 70 eb f6 .{..l...&m.X.p.. 00:22:29.566 000002d0 b3 b2 24 45 5f 8a 1e 01 d2 fa 0e ac 41 9c 76 9e ..$E_.......A.v. 00:22:29.566 000002e0 69 f7 17 ff 0e 3c e0 ce ae dc 28 58 a6 3a f1 45 i....<....(X.:.E 00:22:29.566 000002f0 68 9c 39 a3 5a e1 9b 18 c8 1e a2 72 9e 5f d7 96 h.9.Z......r._.. 00:22:29.566 00000300 a4 dd 41 06 09 86 fb 6c cf 52 a0 6c a3 b1 a4 de ..A....l.R.l.... 00:22:29.566 00000310 f4 99 96 92 5a db c0 f9 c2 ef 2c 79 00 17 cc 7e ....Z.....,y...~ 00:22:29.567 00000320 d4 cf 90 71 d9 38 89 e5 50 90 c9 b4 5d 16 1f 48 ...q.8..P...]..H 00:22:29.567 00000330 83 0e 59 95 0f 18 3c 21 88 1c cc bd 51 9f 94 0e ..Y....^%... 00:22:29.567 00000390 56 c0 93 fa 71 7d ef b7 86 3d b3 80 bb 47 79 93 V...q}...=...Gy. 00:22:29.567 000003a0 28 d0 4a 19 03 94 42 10 82 bf ac 79 97 32 6e a5 (.J...B....y.2n. 
00:22:29.567 000003b0 ea 74 3c e2 c8 14 2b 33 9b 5e a6 03 de 8a 17 19 .t<...+3.^...... 00:22:29.567 000003c0 f8 75 24 36 e9 28 3b c6 83 fe 1d 6b 9e 12 f8 9f .u$6.(;....k.... 00:22:29.567 000003d0 51 ef 82 c6 ea d5 2c f4 6d 13 62 60 ea 0c a6 91 Q.....,.m.b`.... 00:22:29.567 000003e0 2f d2 4a 44 76 47 e1 c9 3d 50 a0 f7 e0 44 fb 0c /.JDvG..=P...D.. 00:22:29.567 000003f0 04 8d 80 46 31 10 99 7e 17 38 e2 bc 24 1a a8 23 ...F1..~.8..$..# 00:22:29.567 [2024-09-27 13:27:29.419820] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key3, hash=3, dhgroup=5, seq=3775755319, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.567 [2024-09-27 13:27:29.420152] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.567 [2024-09-27 13:27:29.505031] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.567 [2024-09-27 13:27:29.505447] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.567 [2024-09-27 13:27:29.505711] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success2 00:22:29.567 [2024-09-27 13:27:29.505945] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.567 [2024-09-27 13:27:29.557981] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.567 [2024-09-27 13:27:29.558313] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.567 [2024-09-27 13:27:29.558459] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 5 (ffdhe8192) 00:22:29.567 [2024-09-27 13:27:29.558653] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.567 [2024-09-27 13:27:29.558876] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.567 ctrlr pubkey: 00:22:29.567 00000000 12 da 26 d1 81 24 b3 89 24 a9 3a 7f cd 75 69 43 ..&..$..$.:..uiC 00:22:29.567 00000010 f0 0b 7d 57 d5 f5 e0 f3 10 5f ae 1c 2a e1 dc 2b ..}W....._..*..+ 00:22:29.567 00000020 3c 8a cf f8 62 43 ae e6 ea 31 28 a5 26 d8 86 62 <...bC...1(.&..b 00:22:29.567 00000030 07 0b 94 e3 c2 09 b1 50 82 89 ce ea 29 a1 a0 85 .......P....)... 00:22:29.567 00000040 da f2 45 93 58 ae ef 22 75 b6 46 85 a1 c3 04 c6 ..E.X.."u.F..... 00:22:29.567 00000050 e8 81 b2 bc 6b 9a 77 67 c7 67 fe 82 80 04 26 57 ....k.wg.g....&W 00:22:29.567 00000060 f4 2f c1 b9 fd 54 ff 1e a7 99 51 52 73 9b 5e f3 ./...T....QRs.^. 00:22:29.567 00000070 70 d0 8d be 94 f1 36 22 c4 fc fd 21 45 c2 45 a4 p.....6"...!E.E. 00:22:29.567 00000080 c2 d9 99 d5 1d 28 ed b1 b4 fd b2 e8 28 43 40 06 .....(......(C@. 00:22:29.567 00000090 04 cf 09 7e f0 6f aa cb 45 75 27 5a 59 a5 19 66 ...~.o..Eu'ZY..f 00:22:29.567 000000a0 d0 de 2c 0b f1 20 e2 1c 62 3a 59 4c b4 52 62 e9 ..,.. ..b:YL.Rb. 
00:22:29.567 000000b0 17 cf 89 b3 f6 c1 c1 bc e1 e4 1a e6 44 5d df 3d ............D].= 00:22:29.567 000000c0 da ef 53 00 71 ac 00 9f 57 fe 9e 6c d1 e5 42 2f ..S.q...W..l..B/ 00:22:29.567 000000d0 75 63 33 ad 08 d0 46 f6 97 45 a2 b8 21 67 42 35 uc3...F..E..!gB5 00:22:29.567 000000e0 77 af 08 0a cf 75 64 7e 0f 96 94 ed 42 81 75 4d w....ud~....B.uM 00:22:29.567 000000f0 d5 87 a1 2a eb b4 50 50 e4 fc c4 55 e2 be 92 22 ...*..PP...U..." 00:22:29.567 00000100 b2 f9 28 c4 d2 86 fb 29 d1 9f ef 4b 6a 6b e2 87 ..(....)...Kjk.. 00:22:29.567 00000110 e6 6c 85 93 2d d0 69 5e 2f eb e4 a4 9e df 39 0c .l..-.i^/.....9. 00:22:29.567 00000120 64 cc ae 38 89 f7 05 6b e7 02 e9 0f 1b 95 3e e6 d..8...k......>. 00:22:29.567 00000130 76 e8 6c 26 5e 1f 0d 43 88 c1 cf 6a eb 0f c8 58 v.l&^..C...j...X 00:22:29.567 00000140 64 cb 65 af 66 c8 7a 1f 6e 51 69 7d fc 69 ac bc d.e.f.z.nQi}.i.. 00:22:29.567 00000150 58 9d d5 78 d7 31 67 e8 97 b2 54 9c b3 65 c4 2b X..x.1g...T..e.+ 00:22:29.567 00000160 cd 4b 79 33 9f 14 b8 c7 38 7b 13 bb 12 32 ed 3a .Ky3....8{...2.: 00:22:29.567 00000170 f8 34 40 ba 29 18 41 d4 8f 4f 44 16 c8 5c 82 0e .4@.).A..OD..\.. 00:22:29.567 00000180 87 5e 65 74 55 ed f9 44 f3 dd fc 4a b3 3f 05 e6 .^etU..D...J.?.. 00:22:29.567 00000190 56 c4 8e 89 d7 23 d3 cf 5b 08 16 de 4a 04 ae 5b V....#..[...J..[ 00:22:29.567 000001a0 84 73 d8 fa 9a ef 56 49 f8 90 e3 cc dc 9a a5 45 .s....VI.......E 00:22:29.567 000001b0 e4 46 c5 ea 81 2b 2d 8a 3e 42 03 fc 86 14 a7 86 .F...+-.>B...... 00:22:29.567 000001c0 17 ed ad e0 e0 ec f4 3d c0 a0 15 02 2c 94 75 22 .......=....,.u" 00:22:29.567 000001d0 dc 9a 97 3a 08 72 6f 87 80 be 07 a0 78 c5 50 03 ...:.ro.....x.P. 00:22:29.567 000001e0 d1 eb c1 0a 37 96 1b fb 50 35 9e 5a 26 a5 92 11 ....7...P5.Z&... 00:22:29.567 000001f0 41 38 92 09 64 c0 c7 d3 18 24 f7 c8 4c 52 77 6b A8..d....$..LRwk 00:22:29.567 00000200 42 15 25 42 10 16 18 9a 85 a0 b2 75 3b ed 20 f0 B.%B.......u;. . 00:22:29.567 00000210 d9 52 08 e6 5a 05 27 17 ad df 6b ee c3 4b 89 64 .R..Z.'...k..K.d 00:22:29.567 00000220 0c db f4 f3 da f7 45 4a bd 66 81 35 03 b7 de e2 ......EJ.f.5.... 00:22:29.567 00000230 36 e4 b4 5d d8 d0 8d 40 c2 09 cb 1c fc e3 cd a0 6..]...@........ 00:22:29.567 00000240 b1 9e 4c bf 26 68 38 20 34 51 3f 42 b9 e6 9f d5 ..L.&h8 4Q?B.... 00:22:29.567 00000250 31 7f 34 ca 41 17 52 0e 41 bb 1e 4d be 69 75 c5 1.4.A.R.A..M.iu. 00:22:29.567 00000260 b5 4f f8 b3 92 59 d4 8f a5 70 f6 ae cc 89 e0 0a .O...Y...p...... 00:22:29.567 00000270 aa f9 1c 42 ce 21 4b 42 c4 b5 5b 45 77 7c 5f a2 ...B.!KB..[Ew|_. 00:22:29.567 00000280 ce d2 15 68 ac a0 11 95 cf 68 61 20 31 bc 6b 72 ...h.....ha 1.kr 00:22:29.567 00000290 4a 23 5f 80 fe 86 f8 f7 8d 03 30 6d 78 af f8 da J#_.......0mx... 00:22:29.567 000002a0 79 6a d9 af 7c bf ee 3d 50 02 7f af 6d e1 fb 11 yj..|..=P...m... 00:22:29.567 000002b0 a5 ed 40 d1 60 82 ad c1 34 8b f8 8f 4e d9 51 ad ..@.`...4...N.Q. 00:22:29.567 000002c0 cb 86 23 c3 e0 34 4b 5a b5 86 f0 7f 17 5f c0 71 ..#..4KZ....._.q 00:22:29.567 000002d0 b6 23 c6 e9 5e c0 61 66 d7 d0 89 59 4d c8 82 de .#..^.af...YM... 00:22:29.567 000002e0 9c 4d 1e d8 6f 54 31 0d 0a 21 e3 58 9d a7 a6 db .M..oT1..!.X.... 00:22:29.567 000002f0 3c 0f e3 43 b8 ce 17 2a 18 50 d3 60 57 b1 8a 84 <..C...*.P.`W... 00:22:29.567 00000300 ee 7a 8e c7 a4 be ba fe 36 a9 eb 1a 57 15 15 15 .z......6...W... 00:22:29.567 00000310 55 84 dc 74 d4 60 68 24 36 16 0c a1 b9 0d 47 ca U..t.`h$6.....G. 00:22:29.567 00000320 85 ec c7 0a a2 6c 4b da fd 4f 88 23 66 bb a6 b5 .....lK..O.#f... 
00:22:29.567 00000330 5d 2f e1 ac b7 e4 ea a0 ff e4 a6 34 d0 d7 9f 05 ]/.........4.... 00:22:29.567 00000340 ad 05 61 78 c4 7f 27 d5 00 a1 0c 50 15 af de 6f ..ax..'....P...o 00:22:29.567 00000350 03 2d f0 b2 56 ea 74 9e f1 2b d7 54 40 61 63 fd .-..V.t..+.T@ac. 00:22:29.567 00000360 f0 e0 19 ff 6f 1b 40 3f ce 35 f6 27 9a 6a 84 2c ....o.@?.5.'.j., 00:22:29.567 00000370 a2 44 b1 20 5d dc 9e 47 0d 96 67 15 c2 94 11 52 .D. ]..G..g....R 00:22:29.567 00000380 56 05 69 ec 7d 07 c8 3b 54 7f 66 ef 7b b9 ce c4 V.i.}..;T.f.{... 00:22:29.567 00000390 48 2b 20 df b4 04 09 37 7c a6 39 14 d9 f3 7a 75 H+ ....7|.9...zu 00:22:29.567 000003a0 1e b2 6f c8 ed c5 ab 69 d4 0f ae 35 06 85 3d 64 ..o....i...5..=d 00:22:29.567 000003b0 cd 31 19 60 f9 34 24 9e 54 f3 0b 7e a2 f2 85 da .1.`.4$.T..~.... 00:22:29.567 000003c0 7a 28 0e f2 ad 19 ff 17 46 d8 46 5f 97 36 11 45 z(......F.F_.6.E 00:22:29.567 000003d0 45 da 8b 36 01 e7 ed 7a 43 85 77 53 e6 9a 4d c2 E..6...zC.wS..M. 00:22:29.567 000003e0 91 af 6f 17 bf 42 10 c3 1b 9c 36 0a b4 74 5f fe ..o..B....6..t_. 00:22:29.567 000003f0 a4 bf 2f d3 7a e1 d5 09 80 45 fe 9e 2c 48 b5 d0 ../.z....E..,H.. 00:22:29.567 host pubkey: 00:22:29.567 00000000 e6 1d f2 e5 f1 61 a7 ba 64 29 7d ad 68 54 5e 4d .....a..d)}.hT^M 00:22:29.567 00000010 29 e3 79 0a d1 71 75 98 de 83 a4 da 21 8a e3 4d ).y..qu.....!..M 00:22:29.567 00000020 ba 49 47 95 f5 a6 28 a0 84 2a 57 07 e7 e8 fd 17 .IG...(..*W..... 00:22:29.567 00000030 c4 51 cb f0 79 1e 66 ca 58 60 75 57 57 23 fb 35 .Q..y.f.X`uWW#.5 00:22:29.567 00000040 73 9e f6 39 4b 1e f4 23 f7 73 05 61 64 87 43 09 s..9K..#.s.ad.C. 00:22:29.567 00000050 99 d3 c1 b2 12 4d 9b f4 2f d0 8f 60 2e 21 85 ae .....M../..`.!.. 00:22:29.567 00000060 fc 62 3e 99 f5 e8 09 b5 85 7d c4 9b eb 37 a6 93 .b>......}...7.. 00:22:29.567 00000070 77 78 87 54 c3 75 ee 01 69 17 d5 aa 7f bb aa 64 wx.T.u..i......d 00:22:29.567 00000080 4f 4a 69 7a 59 e5 f3 03 22 5c f3 e8 99 86 e9 b2 OJizY..."\...... 00:22:29.567 00000090 d5 75 ac 72 1a 29 8a c6 78 17 dd 8b da 18 44 dc .u.r.)..x.....D. 00:22:29.567 000000a0 25 6c c7 78 40 77 91 51 cc 96 a7 5c bd db e4 fa %l.x@w.Q...\.... 00:22:29.567 000000b0 5b 3d 6b 6d 2e d1 fc ff 20 d8 00 04 ce 98 26 75 [=km.... .....&u 00:22:29.567 000000c0 c0 32 d3 b2 85 a8 c3 2d aa 6c 30 d1 c7 7d 2c 9f .2.....-.l0..},. 00:22:29.567 000000d0 54 04 4b 74 48 52 4b 6b 1e 4a 95 65 f3 e3 80 48 T.KtHRKk.J.e...H 00:22:29.567 000000e0 59 fc f7 b7 c0 24 b1 cc 52 99 71 a5 5b 61 a3 fa Y....$..R.q.[a.. 00:22:29.567 000000f0 6a 84 40 01 8d 53 eb 29 91 0d cb 7e fc a0 7e 73 j.@..S.)...~..~s 00:22:29.567 00000100 f8 d3 68 e2 3a c3 e2 8f 1f c5 5e 9f 13 31 b2 6d ..h.:.....^..1.m 00:22:29.567 00000110 10 68 63 47 b0 c4 64 99 1a 82 a4 e7 84 7a 97 d8 .hcG..d......z.. 00:22:29.567 00000120 49 c2 14 8b c4 af 9c a5 ab 0f 9d db 5a 2a 60 24 I...........Z*`$ 00:22:29.567 00000130 dc ec 6c e7 67 43 02 e9 6b e8 07 f6 a9 6d 35 66 ..l.gC..k....m5f 00:22:29.567 00000140 fd f8 c5 3e 79 75 80 7a a5 43 f7 27 9b fa c6 87 ...>yu.z.C.'.... 00:22:29.567 00000150 23 06 42 82 0a 35 66 9c 8b fd 63 b2 7b 13 66 e0 #.B..5f...c.{.f. 00:22:29.567 00000160 ed 2d 00 bf e0 2e 45 31 76 4c ae 33 f9 03 b2 cf .-....E1vL.3.... 00:22:29.567 00000170 24 3c 26 23 dc 52 77 8d ca d3 60 54 73 ff 12 43 $<&#.Rw...`Ts..C 00:22:29.567 00000180 2f 43 83 28 fc f2 bc 26 bb 96 f2 e7 da 5e 3d 6a /C.(...&.....^=j 00:22:29.567 00000190 66 0d 2b e0 42 9c e7 05 3d c2 03 dd 4d 6c 94 ba f.+.B...=...Ml.. 
00:22:29.567 000001a0 e2 f0 92 e6 d9 50 81 bd d9 e8 05 d2 33 7b 62 79 .....P......3{by 00:22:29.567 000001b0 c0 e7 3a cc de e7 78 7f 92 5c 33 95 ad fd ae 71 ..:...x..\3....q 00:22:29.567 000001c0 5f 72 bc b0 66 1d a1 98 9c a5 c7 5e 86 31 d1 97 _r..f......^.1.. 00:22:29.567 000001d0 b4 cb 2a a4 f6 6b 84 ad 91 e2 59 b3 f8 93 93 02 ..*..k....Y..... 00:22:29.567 000001e0 72 10 6c 22 44 f5 d3 22 f9 c7 f0 f3 df c9 88 f3 r.l"D.."........ 00:22:29.567 000001f0 cb cc 21 b0 8c e9 98 51 d8 72 89 ef ee 41 a0 c7 ..!....Q.r...A.. 00:22:29.567 00000200 43 83 2f 4b 21 82 d2 18 eb 02 a1 b4 2e b8 ba 8b C./K!........... 00:22:29.567 00000210 d6 45 67 07 63 e0 79 8b cc 94 67 44 03 76 a1 22 .Eg.c.y...gD.v." 00:22:29.567 00000220 fe dc f5 46 2b 37 43 0f 24 52 c9 e9 e6 53 5a 94 ...F+7C.$R...SZ. 00:22:29.567 00000230 fd 78 1c f6 35 46 08 cc 31 bc 88 fa d0 80 92 55 .x..5F..1......U 00:22:29.567 00000240 86 d1 50 66 e4 09 4b 9d df f1 65 da 2f 56 61 1c ..Pf..K...e./Va. 00:22:29.567 00000250 b5 7a 99 66 c1 2c cc 85 62 81 f4 c4 69 4a 47 92 .z.f.,..b...iJG. 00:22:29.567 00000260 95 4d 4e c7 48 95 bf 65 d0 67 98 5c c3 91 00 d5 .MN.H..e.g.\.... 00:22:29.567 00000270 e4 9a f3 41 0c e7 76 c6 e9 b6 08 30 2d 26 c2 ab ...A..v....0-&.. 00:22:29.567 00000280 0c 34 1e 67 4b 2a 4a 2e 32 b4 61 01 0c 36 0e 94 .4.gK*J.2.a..6.. 00:22:29.567 00000290 74 12 ae dc 20 13 60 05 30 93 ab 5f 4f 78 6a 27 t... .`.0.._Oxj' 00:22:29.567 000002a0 0a 42 52 da 85 00 b5 53 e4 4c 4e d7 38 89 9b 34 .BR....S.LN.8..4 00:22:29.567 000002b0 d8 bd 7b d9 69 94 df c3 b4 a0 65 28 cd 5a a4 70 ..{.i.....e(.Z.p 00:22:29.567 000002c0 7a 0a 42 ee f0 de 6b 1d 90 2f 8c 4a 97 20 27 33 z.B...k../.J. '3 00:22:29.567 000002d0 2d c9 67 d0 d7 d2 7e 95 65 b1 b5 bd bc 0f 06 99 -.g...~.e....... 00:22:29.567 000002e0 63 07 ec 04 4e d3 04 6e 27 0a e9 c3 8b 48 eb f0 c...N..n'....H.. 00:22:29.567 000002f0 b2 08 c8 64 d7 08 21 df 24 2a 1e 78 3d af e1 8f ...d..!.$*.x=... 00:22:29.567 00000300 87 ee 65 ba 45 8a 4b dc 97 2b 80 dd fb 6e c1 f6 ..e.E.K..+...n.. 00:22:29.567 00000310 f0 8c e0 28 a2 60 02 10 3b 66 5a 0a a8 5c 86 68 ...(.`..;fZ..\.h 00:22:29.567 00000320 83 95 8c 0d 2d 74 1a 10 7d 40 56 2b 33 49 3e 85 ....-t..}@V+3I>. 00:22:29.567 00000330 ed 90 38 98 03 f2 9b df 68 69 e5 0a ff 98 72 b8 ..8.....hi....r. 00:22:29.567 00000340 11 f5 57 7d 7a bf 83 c0 0f 5e 0a d5 da 25 8c b1 ..W}z....^...%.. 00:22:29.567 00000350 85 fe 2f 3b ca 7e 3d 79 c3 f3 dd 23 10 b4 c4 fc ../;.~=y...#.... 00:22:29.567 00000360 4d 83 b9 6b d9 58 56 ca eb 4d 6c 16 38 2b be b3 M..k.XV..Ml.8+.. 00:22:29.567 00000370 ba b7 10 5d af db 82 ce 39 a4 fe 06 9e 32 54 ce ...]....9....2T. 00:22:29.567 00000380 c1 23 54 43 7e 90 b2 04 b2 47 0f 28 25 31 e3 79 .#TC~....G.(%1.y 00:22:29.567 00000390 62 eb 92 f0 4e ce fb 9b c4 e4 b0 db d2 9d 5f 39 b...N........._9 00:22:29.567 000003a0 c6 69 6f 2a 93 c5 3b e0 86 5c 77 b9 2f e6 f1 66 .io*..;..\w./..f 00:22:29.567 000003b0 45 a6 6e e3 bf 18 2e 64 20 68 a0 95 3a 99 e6 39 E.n....d h..:..9 00:22:29.567 000003c0 e0 9c 34 d5 3a 68 45 21 ab 0e bc 93 b7 6a d0 69 ..4.:hE!.....j.i 00:22:29.567 000003d0 58 60 f3 cc e3 9e 1c ba 13 ed ee 21 8e 56 ad 69 X`.........!.V.i 00:22:29.567 000003e0 7a 6e ad c0 41 75 75 53 70 e8 09 a7 4e a3 8e fe zn..AuuSp...N... 00:22:29.567 000003f0 82 1f 1f 99 78 0b 50 75 37 db 08 d0 82 27 6c 5d ....x.Pu7....'l] 00:22:29.567 dh secret: 00:22:29.567 00000000 3b bc e1 d7 83 a2 62 d5 ce 26 23 1d bc 95 3b db ;.....b..&#...;. 00:22:29.567 00000010 99 67 c8 18 05 c4 65 a3 95 be f6 40 09 51 51 ff .g....e....@.QQ. 
00:22:29.567 00000020 13 9e e1 ac 50 56 d3 40 10 c6 7e 7c 8b 8f 8e ea ....PV.@..~|.... 00:22:29.567 00000030 b0 5b bb 32 cc ec 79 97 96 a7 e3 a3 94 2d b3 e5 .[.2..y......-.. 00:22:29.567 00000040 2b 2b a8 b6 61 60 94 a8 ef 14 f5 07 03 26 3c 93 ++..a`.......&<. 00:22:29.567 00000050 bd 43 7d 48 fe 36 43 30 d4 72 a9 ab 58 ef a6 6f .C}H.6C0.r..X..o 00:22:29.567 00000060 64 71 df d6 83 85 a7 87 2c 42 84 ee 23 dd 71 1d dq......,B..#.q. 00:22:29.567 00000070 01 6d 2f 86 61 8b 9e 8f f0 dc de dd 2c 84 35 1b .m/.a.......,.5. 00:22:29.567 00000080 93 09 ac 92 eb 5c b1 af f9 92 8d 18 66 23 4c 8e .....\......f#L. 00:22:29.567 00000090 56 8b 92 3f 04 e9 4a ec d7 71 92 71 51 32 b6 1a V..?..J..q.qQ2.. 00:22:29.567 000000a0 77 7b c7 06 55 16 ae 4f 06 f2 15 3a f1 1d 1e 1b w{..U..O...:.... 00:22:29.567 000000b0 55 a8 91 a5 84 f2 a9 34 f6 06 fb 0f a5 84 59 1f U......4......Y. 00:22:29.567 000000c0 84 ec 10 69 6e e4 f0 d3 29 e2 1a 6e d1 58 7b 66 ...in...)..n.X{f 00:22:29.567 000000d0 48 cb 49 ab c8 31 33 0a ac 13 4d 2f cb c3 4d 4d H.I..13...M/..MM 00:22:29.567 000000e0 ad dd 3c 84 f8 c2 24 ee cb 19 69 b1 e3 dd 32 a1 ..<...$...i...2. 00:22:29.567 000000f0 e4 8a a3 14 e7 fe c0 b6 ad 0e 28 15 54 90 02 ed ..........(.T... 00:22:29.567 00000100 2a 27 11 39 96 b8 13 2a f9 48 47 ab 80 5f be 50 *'.9...*.HG.._.P 00:22:29.567 00000110 df 7f e6 32 a3 a9 26 d0 c7 49 72 d1 81 0a ba 52 ...2..&..Ir....R 00:22:29.567 00000120 7a 4f ca 21 c3 00 58 b4 8d 3e a5 4f 4f 93 cd a9 zO.!..X..>.OO... 00:22:29.567 00000130 6a b0 ec 2a 27 2f f7 f1 96 c5 36 6d 47 d7 cd 9b j..*'/....6mG... 00:22:29.568 00000140 12 9d 6a ae a4 e9 e5 2c 24 8e 90 94 e7 5d af 65 ..j....,$....].e 00:22:29.568 00000150 d3 1c 36 3d 37 f7 47 ad 37 c9 93 04 ff e4 63 46 ..6=7.G.7.....cF 00:22:29.568 00000160 56 7c 62 57 ce 51 a1 e7 7e c7 67 ca 86 e8 06 bd V|bW.Q..~.g..... 00:22:29.568 00000170 55 43 b0 c3 28 fe e4 0a 29 3b b6 26 01 6b a0 4a UC..(...);.&.k.J 00:22:29.568 00000180 27 e6 e3 4b 60 c1 e6 98 72 29 91 41 ca f6 fa b4 '..K`...r).A.... 00:22:29.568 00000190 47 10 69 f2 fb 44 03 f9 fd b6 22 a3 ac 9f af 91 G.i..D...."..... 00:22:29.568 000001a0 da 9e b6 de 8f 82 52 73 8e a1 83 78 7e 35 36 b2 ......Rs...x~56. 00:22:29.568 000001b0 a7 84 86 bb 9f d1 12 42 e3 97 28 bb af 1a 8b 0f .......B..(..... 00:22:29.568 000001c0 a0 27 6b 95 d8 29 3f 83 6b 63 1e e5 bf d6 6b f0 .'k..)?.kc....k. 00:22:29.568 000001d0 59 a8 bb ab c1 2b da 3e 11 6c e4 97 db 2c 80 8c Y....+.>.l...,.. 00:22:29.568 000001e0 68 7d e6 eb 38 eb 0d c1 7f 84 ec 8c e6 63 2a 1b h}..8........c*. 00:22:29.568 000001f0 aa 24 b0 91 5f 3f dc fb 5e 22 d1 c4 76 7d 13 00 .$.._?..^"..v}.. 00:22:29.568 00000200 8f 0f 35 3c 16 7b 15 ba 74 9b f0 75 c5 ce 2f 88 ..5<.{..t..u../. 00:22:29.568 00000210 a5 66 7c 2b 28 7b 38 ad 9f 25 89 10 64 bb 61 fd .f|+({8..%..d.a. 00:22:29.568 00000220 58 3c 60 1b 66 e9 1b 2e ac 22 51 91 b8 d7 a9 f4 X<`.f...."Q..... 00:22:29.568 00000230 ff e6 ef 27 c1 db 22 10 76 2b 54 41 c7 a9 ed e3 ...'..".v+TA.... 00:22:29.568 00000240 ff b7 e2 32 84 a9 53 9d 8c 71 09 7e 47 bb 80 07 ...2..S..q.~G... 00:22:29.568 00000250 36 1b a4 ac b3 af 40 22 6b 10 9a 66 5b 5d 34 bf 6.....@"k..f[]4. 00:22:29.568 00000260 5b e0 e4 66 cd 15 26 60 ba d5 fe e3 b2 85 55 cb [..f..&`......U. 00:22:29.568 00000270 55 66 9e 57 15 a0 c0 42 a7 8e d7 3c d2 ef 57 c3 Uf.W...B...<..W. 00:22:29.568 00000280 4f 69 d0 b9 59 6d 23 82 b2 e3 d1 3d b7 6a 6e b1 Oi..Ym#....=.jn. 
00:22:29.568 00000290 66 c2 ac a3 65 f9 e3 6f a1 38 a0 cd f3 b9 56 40 f...e..o.8....V@ 00:22:29.568 000002a0 61 71 03 0d e1 36 0a e5 15 f4 52 9d eb 02 41 91 aq...6....R...A. 00:22:29.568 000002b0 38 be f7 57 66 50 9c ea f4 08 62 be 00 a5 75 d4 8..WfP....b...u. 00:22:29.568 000002c0 ee e5 42 89 47 bc 80 73 83 09 4d 49 83 80 6c 61 ..B.G..s..MI..la 00:22:29.568 000002d0 0c 7e 8f 1c ea 94 18 7b 3f c0 61 cd ad fd 4b df .~.....{?.a...K. 00:22:29.568 000002e0 dd 47 7f 65 dc 19 9f a6 e7 39 53 06 1e ea f4 77 .G.e.....9S....w 00:22:29.568 000002f0 b4 9e e7 b4 a7 91 a2 77 e7 c1 12 43 66 7b d8 f0 .......w...Cf{.. 00:22:29.568 00000300 bb ef f6 49 97 f5 7f 15 79 d4 c1 cf 48 c8 03 04 ...I....y...H... 00:22:29.568 00000310 20 84 e0 64 c7 a8 53 40 51 ba d1 08 89 8a 10 89 ..d..S@Q....... 00:22:29.568 00000320 b2 ae 74 7d de c3 a5 50 4e 1e 73 2d 6b bb ad 63 ..t}...PN.s-k..c 00:22:29.568 00000330 fe 02 79 ec f7 3e 58 14 05 61 97 1c f7 7c 23 2c ..y..>X..a...|#, 00:22:29.568 00000340 7a 22 0e 9a ef 94 e2 f5 8f f5 9e 7b 35 c0 ae 4c z".........{5..L 00:22:29.568 00000350 80 d1 55 72 73 1e 26 b7 86 53 07 d6 8d ca d9 9f ..Urs.&..S...... 00:22:29.568 00000360 62 cf be c2 3b ce 00 4a 99 77 83 85 03 20 8d cd b...;..J.w... .. 00:22:29.568 00000370 cb 0e e8 d4 88 1a b2 65 e7 fb 62 c1 d8 ac 00 a0 .......e..b..... 00:22:29.568 00000380 1d 7b 18 5d fd 40 27 d9 f3 57 c9 33 e8 18 05 8e .{.].@'..W.3.... 00:22:29.568 00000390 5d a2 0e 29 d2 5e 61 c7 2e 37 c8 74 d5 f4 d1 6d ]..).^a..7.t...m 00:22:29.568 000003a0 38 1b 70 5a 56 80 52 20 c8 60 8f 62 86 1f 2d 09 8.pZV.R .`.b..-. 00:22:29.568 000003b0 dd 46 ec 9a db 5c 31 0c b9 fd cc 08 a7 52 0e f0 .F...\1......R.. 00:22:29.568 000003c0 f8 d2 02 e9 23 df bb c5 be 32 3f 96 4e da b7 a8 ....#....2?.N... 00:22:29.568 000003d0 4a b5 fd f7 44 d6 ee fb a5 f0 64 35 b5 6e c0 70 J...D.....d5.n.p 00:22:29.568 000003e0 66 61 c7 b4 14 7f b1 8c 26 d8 b5 f2 66 39 f1 b3 fa......&...f9.. 
00:22:29.568 000003f0 67 e7 89 dd 9d ae c0 76 73 73 08 07 99 cf 0a 60 g......vss.....` 00:22:29.568 [2024-09-27 13:27:29.722107] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key3, hash=3, dhgroup=5, seq=3775755320, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.568 [2024-09-27 13:27:29.722409] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.568 [2024-09-27 13:27:29.804216] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.568 [2024-09-27 13:27:29.804646] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.568 [2024-09-27 13:27:29.804930] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success2 00:22:29.568 [2024-09-27 13:27:29.805153] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.568 [2024-09-27 13:27:29.957596] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.568 [2024-09-27 13:27:29.957983] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 3 (sha512) 00:22:29.568 [2024-09-27 13:27:29.958121] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 5 (ffdhe8192) 00:22:29.568 [2024-09-27 13:27:29.958236] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.568 [2024-09-27 13:27:29.958464] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.568 ctrlr pubkey: 00:22:29.568 00000000 9d 8b 9f 7d 97 8b 40 5c 57 0b 24 63 f3 4d 78 83 ...}..@\W.$c.Mx. 00:22:29.568 00000010 fa 7f 81 ec ce ce e1 03 b9 78 e3 5f 9f de a8 fe .........x._.... 00:22:29.568 00000020 8f 4b 05 cb cd 2c 4b cf 5f 8b 0a 3e 1d 2b 20 b1 .K...,K._..>.+ . 00:22:29.568 00000030 a7 0b 7f e4 97 0a 99 0b 7c d2 60 bd 93 62 ce 24 ........|.`..b.$ 00:22:29.568 00000040 f4 72 3a 17 c2 d1 b0 16 4e 94 14 77 41 1e 36 53 .r:.....N..wA.6S 00:22:29.568 00000050 85 59 35 46 b4 af df 81 00 03 af 28 e3 4c 14 ff .Y5F.......(.L.. 00:22:29.568 00000060 57 ee 85 2e fa 80 53 dc 88 4e 4b 78 ec 6c db 45 W.....S..NKx.l.E 00:22:29.568 00000070 3a 8d f4 27 a8 5a c0 be a9 5a 72 6a 15 d6 85 98 :..'.Z...Zrj.... 00:22:29.568 00000080 21 d2 a8 00 17 9c 98 d4 29 c7 ad 88 ad 3b e3 9e !.......)....;.. 00:22:29.568 00000090 96 6d 81 32 d7 9a 8f 21 76 fc 9d 8e c6 8f 2f 39 .m.2...!v...../9 00:22:29.568 000000a0 d5 33 d3 09 d9 e2 74 26 e8 51 ba 35 ae 95 a8 86 .3....t&.Q.5.... 00:22:29.568 000000b0 e5 2c 81 b3 7d be 31 68 06 ca 4f aa a5 a6 4f a2 .,..}.1h..O...O. 00:22:29.568 000000c0 9f 8b 73 70 89 43 dd 1a f8 ec c3 ba 3d 3c 8c 4f ..sp.C......=<.O 00:22:29.568 000000d0 6e 78 f0 e2 c3 a7 4d b2 f6 a0 43 31 22 9e 50 dc nx....M...C1".P. 
00:22:29.568 000000e0 4e 91 3d 21 a9 a0 06 39 5c dd 9a d2 5d 43 c5 4c N.=!...9\...]C.L 00:22:29.568 000000f0 0a cd cf 59 05 a8 19 2f 7c 8c 84 09 38 65 b1 47 ...Y.../|...8e.G 00:22:29.568 00000100 d0 ac e7 e4 5d b1 56 c0 ba ef b3 d6 2e aa a2 39 ....].V........9 00:22:29.568 00000110 e2 6a 2e dd 43 ae 9c 7e 76 0a 42 93 d9 e9 a2 9e .j..C..~v.B..... 00:22:29.568 00000120 2d 86 c8 7e 16 e6 5f eb a2 78 cb 28 e4 ec 69 24 -..~.._..x.(..i$ 00:22:29.568 00000130 7c e4 61 2f 0a fe 96 df 0e 16 8d 25 76 20 c9 05 |.a/.......%v .. 00:22:29.568 00000140 68 c1 e8 eb d9 27 2d ab c5 e1 4f cb ef 21 24 a1 h....'-...O..!$. 00:22:29.568 00000150 62 3f 78 0c 81 00 07 e4 e2 cc cd 5e ff 55 ff 7d b?x........^.U.} 00:22:29.568 00000160 1f 11 7c 10 70 65 92 23 a2 66 80 a9 8a 82 23 18 ..|.pe.#.f....#. 00:22:29.568 00000170 26 53 34 16 e7 4b c9 df 2e de f2 f8 d7 e9 75 bc &S4..K........u. 00:22:29.568 00000180 10 f5 9b a5 92 9b ea 06 1d 09 97 11 2e 7d 18 da .............}.. 00:22:29.568 00000190 50 85 ee 48 d2 d9 2f 87 a1 2a 14 b2 ba 3c 64 24 P..H../..*....+G4<. 00:22:29.568 000001f0 bd e6 60 2d 4d e3 18 0e 39 50 14 6f d8 d2 1b d9 ..`-M...9P.o.... 00:22:29.568 00000200 25 f9 b1 d1 0d 19 3f 50 d1 40 35 e5 f3 8b 12 9b %.....?P.@5..... 00:22:29.568 00000210 c7 68 6e bc 43 f2 be bf 0a d0 74 84 2e c9 ae 38 .hn.C.....t....8 00:22:29.568 00000220 2a 04 81 4b 41 ef 64 34 f3 c9 ba 4a cb dc 76 69 *..KA.d4...J..vi 00:22:29.568 00000230 93 e1 98 b3 d5 0d f5 f9 cc 4d 85 98 14 a2 15 7d .........M.....} 00:22:29.568 00000240 f0 76 77 21 81 cc 97 9b 36 3e 7b d9 44 ff 63 85 .vw!....6>{.D.c. 00:22:29.568 00000250 68 d7 ab f1 38 b1 bd 99 e6 d0 c1 cb ca 8d c4 ee h...8........... 00:22:29.568 00000260 78 45 1e 94 77 ca 12 60 e9 4e 5f ab 1e fb 2b 03 xE..w..`.N_...+. 00:22:29.568 00000270 3f 41 5e fa 39 a3 a1 76 5d 7a 46 7c 76 03 cb f5 ?A^.9..v]zF|v... 00:22:29.568 00000280 75 f7 68 05 80 7d ad 98 18 23 5d a3 40 f3 98 a6 u.h..}...#].@... 00:22:29.568 00000290 e5 da 7e f1 ec 6b 79 6e c8 72 cf 5c fb 8d e9 58 ..~..kyn.r.\...X 00:22:29.568 000002a0 ef 5b 75 b2 e2 a3 6c 6e 99 9f 5f ec 89 ba 49 64 .[u...ln.._...Id 00:22:29.568 000002b0 a3 db a8 6c d1 82 b5 c5 be 4f 98 a4 d7 07 10 3e ...l.....O.....> 00:22:29.568 000002c0 0b 4f 03 65 53 45 fd 1e d9 f2 04 cf 9a e3 45 61 .O.eSE........Ea 00:22:29.568 000002d0 52 a6 35 36 84 d1 a6 e9 40 e4 e4 66 5a 08 d6 1d R.56....@..fZ... 00:22:29.568 000002e0 aa 87 14 95 ae f9 20 51 c8 4a 75 f4 bf 8a 46 e3 ...... Q.Ju...F. 00:22:29.568 000002f0 86 e8 7c f9 4a 46 d1 da 98 22 d7 30 92 90 d9 34 ..|.JF...".0...4 00:22:29.568 00000300 66 da cf d2 b6 39 46 2a a3 3d 85 50 b3 84 82 c0 f....9F*.=.P.... 00:22:29.568 00000310 88 b5 f5 b4 ee 49 f6 20 58 2a 86 4f a6 26 0f 61 .....I. X*.O.&.a 00:22:29.568 00000320 b9 a6 3c ac 61 48 e7 87 49 8a fb fc 4a 3a 77 26 ..<.aH..I...J:w& 00:22:29.568 00000330 98 fd ed 5b 94 8d 89 5b a1 25 a0 23 0f 91 e1 a2 ...[...[.%.#.... 00:22:29.568 00000340 ad bc 06 15 ed dc ca 23 10 ce 28 71 58 3e eb ac .......#..(qX>.. 00:22:29.568 00000350 b1 80 0f 24 b8 f5 80 cb ef cb 38 11 65 e8 c5 69 ...$......8.e..i 00:22:29.568 00000360 01 0b f7 39 ae 09 b9 03 60 37 8b d1 b6 01 81 7e ...9....`7.....~ 00:22:29.568 00000370 14 23 0d e5 34 e5 78 61 25 b0 c8 0e e3 ab 12 ea .#..4.xa%....... 00:22:29.568 00000380 e1 e7 22 00 dd d7 6b 3e 42 3a b6 c7 36 e9 7f 1f .."...k>B:..6... 00:22:29.568 00000390 7c bb 62 dc c3 af 30 72 9c 77 43 fb 52 f0 65 3f |.b...0r.wC.R.e? 00:22:29.568 000003a0 62 04 a4 ce b7 54 d9 2f 54 b2 13 71 11 b1 8f a0 b....T./T..q.... 
00:22:29.568 000003b0 99 8e b0 c7 19 79 81 12 9c eb 1d 65 f8 97 f8 ba .....y.....e.... 00:22:29.568 000003c0 12 54 59 a8 06 c4 82 71 1c 58 3d 1f c7 8c 72 f7 .TY....q.X=...r. 00:22:29.568 000003d0 17 ca e7 a6 4a 5c b6 e4 21 29 65 49 6a 73 38 3d ....J\..!)eIjs8= 00:22:29.568 000003e0 e6 00 fb 6e 93 a3 00 63 39 4a 26 c4 57 9f 15 5d ...n...c9J&.W..] 00:22:29.568 000003f0 4c 06 72 5d e1 bc 6d 38 2e a7 ae 69 19 24 9f dd L.r]..m8...i.$.. 00:22:29.568 host pubkey: 00:22:29.568 00000000 b8 48 62 68 ab 32 33 e4 ce 60 40 b2 90 ad 20 50 .Hbh.23..`@... P 00:22:29.568 00000010 45 56 94 53 1b 19 72 59 a4 4e 4a ba 46 fa 56 d4 EV.S..rY.NJ.F.V. 00:22:29.568 00000020 5f bf cc 8c 06 2f b9 0c 0e 50 4d ec 6e d3 d3 77 _..../...PM.n..w 00:22:29.568 00000030 d2 5b 12 6f 90 63 4d 6b 0a d3 53 1e 02 e0 af ee .[.o.cMk..S..... 00:22:29.568 00000040 48 96 0f 89 d6 ef 1e 17 e0 d2 0d f0 a2 b4 ad d3 H............... 00:22:29.568 00000050 1a 95 f2 91 c0 df f6 73 8b 8e ea 9c 62 39 db b3 .......s....b9.. 00:22:29.568 00000060 02 ca 79 2a 7d a0 9d 16 aa 6c 85 46 0a 01 74 83 ..y*}....l.F..t. 00:22:29.568 00000070 cc 6b ac 5b 09 f8 d0 1b b3 98 47 47 d7 a3 cb ba .k.[......GG.... 00:22:29.568 00000080 5b 4e fe 16 68 64 cb 72 ba 6a 0d 61 c2 41 1e 2e [N..hd.r.j.a.A.. 00:22:29.568 00000090 a3 1f ce 58 d2 fb ec 5d 37 d3 5d 4c c4 fa ac a1 ...X...]7.]L.... 00:22:29.568 000000a0 33 65 7c bb f6 0e e4 7d c6 9d 6e 84 10 20 dc e0 3e|....}..n.. .. 00:22:29.568 000000b0 2b 3a 3e 4e d7 a8 83 8f 34 ff b0 88 38 78 fe c3 +:>N....4...8x.. 00:22:29.568 000000c0 46 fd 4e 10 e4 09 41 6b dc 47 54 e5 73 d6 90 3b F.N...Ak.GT.s..; 00:22:29.568 000000d0 27 d9 38 52 dc 92 f8 91 88 ec 12 cb 30 1c c3 38 '.8R........0..8 00:22:29.568 000000e0 9f ac ed 7c ac ed 38 73 99 fc ba fc 74 24 29 e9 ...|..8s....t$). 00:22:29.568 000000f0 31 81 d3 cb e6 66 3d 22 eb c1 31 85 37 e7 5b 04 1....f="..1.7.[. 00:22:29.568 00000100 96 ba c8 cc 7c 6a 87 61 85 de c3 26 1d 7f 47 ae ....|j.a...&..G. 00:22:29.568 00000110 15 d8 da be 3b ce d6 bd 50 6a 1f a0 a1 27 4e ea ....;...Pj...'N. 00:22:29.568 00000120 69 59 79 b2 44 de f0 d4 c1 1c a5 55 9d 3f fd a4 iYy.D......U.?.. 00:22:29.568 00000130 2c 6f 57 33 e6 11 f1 57 a8 01 8d 05 29 3c 58 6f ,oW3...W....)!....= 00:22:29.568 00000150 4a 63 64 2a b4 2c 17 a6 58 07 44 93 37 e7 88 3d Jcd*.,..X.D.7..= 00:22:29.568 00000160 ab fc 05 67 cb 02 5d 14 1f 97 c5 4c 91 65 92 c8 ...g..]....L.e.. 00:22:29.568 00000170 9d af 6f 1d cd 7a c3 84 30 57 49 25 90 e3 a1 20 ..o..z..0WI%... 00:22:29.568 00000180 90 ca 77 25 11 fd 19 08 75 0c d2 2c 91 bc 50 33 ..w%....u..,..P3 00:22:29.568 00000190 09 46 8f 60 48 7b 18 95 46 ab 73 97 a5 82 a6 9b .F.`H{..F.s..... 00:22:29.568 000001a0 2d e2 d2 1d 6b d8 44 b8 c4 c8 ce f6 69 0d 31 f0 -...k.D.....i.1. 00:22:29.568 000001b0 c7 99 b4 9b cf 08 79 78 7b 42 8c ec ec 3e 58 ec ......yx{B...>X. 00:22:29.568 000001c0 2d 60 8d 31 76 61 e9 3d c1 5a 8e d1 f4 c9 8d ff -`.1va.=.Z...... 00:22:29.568 000001d0 af f7 fd 0c 40 19 6e bf fd 53 9f 2d 5c 80 e4 f3 ....@.n..S.-\... 00:22:29.568 000001e0 c9 13 4c 08 64 db 60 7c 65 2e 66 bb 06 ae 90 0a ..L.d.`|e.f..... 00:22:29.568 000001f0 56 b6 d8 ef a7 e7 eb 31 d1 77 1f 6c 83 1d aa 57 V......1.w.l...W 00:22:29.568 00000200 b4 21 f3 18 f9 7c b5 0c f8 23 cd eb 0f 20 7a b8 .!...|...#... z. 00:22:29.568 00000210 3d 64 0e a6 38 72 25 e2 c5 b1 25 40 6f e7 a1 b4 =d..8r%...%@o... 00:22:29.568 00000220 34 07 75 30 4b f6 79 32 df 05 e9 07 1e 63 51 f8 4.u0K.y2.....cQ. 00:22:29.568 00000230 20 73 56 5b cb 0c c1 8b f4 e7 6c 28 00 00 65 e1 sV[......l(..e. 
00:22:29.568 00000240 e0 40 92 6f e3 d8 07 f0 bf 45 58 65 0c 6a 5d 11 .@.o.....EXe.j]. 00:22:29.568 00000250 a3 26 10 17 e3 05 23 fe cd 5d 33 81 f8 08 09 43 .&....#..]3....C 00:22:29.568 00000260 74 c4 ef 36 e2 5e d5 5e cb 8a a7 b1 83 1f 0f d4 t..6.^.^........ 00:22:29.568 00000270 8c ab 80 1e ad 14 84 d1 b1 57 6e cd a2 18 18 e6 .........Wn..... 00:22:29.568 00000280 5f 13 58 78 95 c8 f2 5d 9e 2e 51 db 03 19 b9 86 _.Xx...]..Q..... 00:22:29.568 00000290 b4 53 b5 46 6f 74 94 bc 4d 9d 4d 0a 20 c2 c7 e9 .S.Fot..M.M. ... 00:22:29.568 000002a0 97 00 ad a4 d5 a4 a7 b5 c1 82 59 ef ca 64 7b 7b ..........Y..d{{ 00:22:29.568 000002b0 a3 5f 40 04 73 10 00 fe 47 b8 5d 92 4b f5 52 72 ._@.s...G.].K.Rr 00:22:29.568 000002c0 28 97 04 0f c6 86 a6 f8 13 49 b1 99 07 b1 84 44 (........I.....D 00:22:29.568 000002d0 01 5b 56 87 20 18 45 b5 05 46 97 40 98 3f b7 e3 .[V. .E..F.@.?.. 00:22:29.568 000002e0 0d 36 c5 24 8a e1 f5 49 66 5c 33 b9 7c 88 2b 8d .6.$...If\3.|.+. 00:22:29.568 000002f0 fb c0 b1 c1 d7 b3 df d2 ab 20 e7 49 a0 ba 91 49 ......... .I...I 00:22:29.568 00000300 d5 8b 1d 8c 8e 30 d2 bf d7 b7 42 68 32 67 0f d0 .....0....Bh2g.. 00:22:29.568 00000310 32 ab 57 04 f7 85 9c d6 5b 61 af 0c 23 c4 34 21 2.W.....[a..#.4! 00:22:29.568 00000320 ab 83 6c 12 9b e3 59 5f 64 58 a7 a2 44 0d aa be ..l...Y_dX..D... 00:22:29.568 00000330 87 44 f7 c3 ef e0 c4 64 f7 fb c7 7b a7 cc 62 36 .D.....d...{..b6 00:22:29.568 00000340 a1 b8 37 01 c6 d0 78 a2 20 52 37 94 91 3a e1 db ..7...x. R7..:.. 00:22:29.568 00000350 43 9d 3c 09 99 1b 7d 59 7c 76 d0 a6 5e 29 89 e5 C.<...}Y|v..^).. 00:22:29.568 00000360 e3 6f c4 7a 80 e7 46 06 86 aa eb dc 8f ea d5 7c .o.z..F........| 00:22:29.568 00000370 d3 15 42 34 63 87 b6 1f 66 9d 4c 1a 02 c3 4a b4 ..B4c...f.L...J. 00:22:29.568 00000380 d5 74 5f 50 5e 79 ad 26 7b 39 f4 d4 35 9a da 58 .t_P^y.&{9..5..X 00:22:29.568 00000390 4f ec 83 ae 8d 80 c9 00 0e d2 ba 10 d9 ee 3d 72 O.............=r 00:22:29.568 000003a0 61 f1 a8 af 84 f9 59 df 78 c1 74 19 ec ba 1a fe a.....Y.x.t..... 00:22:29.568 000003b0 72 e6 5e 87 65 1e 9c 78 b3 3e 38 2e 28 a7 14 9e r.^.e..x.>8.(... 00:22:29.568 000003c0 9d f7 5d a9 0d bf 1d 30 0f 9b 2d 96 9c 84 11 4d ..]....0..-....M 00:22:29.568 000003d0 8c f3 c5 0a 93 a9 10 74 1a ca fb 5d 21 f8 9e 85 .......t...]!... 00:22:29.568 000003e0 15 38 8f 6a 66 b5 d3 55 f7 85 38 52 97 c7 8e 59 .8.jf..U..8R...Y 00:22:29.568 000003f0 04 69 36 3e 77 29 8d 6a 48 68 f7 28 5c 80 d3 66 .i6>w).jHh.(\..f 00:22:29.568 dh secret: 00:22:29.568 00000000 dc 8c 6a 16 fe 83 56 70 58 24 30 76 95 46 25 01 ..j...VpX$0v.F%. 00:22:29.568 00000010 86 98 a4 f3 d5 42 33 64 e7 2c 13 d5 62 5d b7 68 .....B3d.,..b].h 00:22:29.568 00000020 01 2d be 1b 51 c0 ed d4 bb 63 cf 74 cb 11 27 ce .-..Q....c.t..'. 00:22:29.568 00000030 24 14 2a 81 9a ff f6 dc 9d c7 de 40 c8 93 d9 e4 $.*........@.... 00:22:29.568 00000040 8b a2 37 33 bd 7e 50 10 c6 92 f8 10 ad e9 69 3e ..73.~P.......i> 00:22:29.568 00000050 93 61 60 ad 01 35 82 66 5f 7d 61 19 83 c0 52 fd .a`..5.f_}a...R. 00:22:29.568 00000060 9a 34 70 63 1a e6 1d aa c0 37 54 fb 16 1b 19 c2 .4pc.....7T..... 00:22:29.568 00000070 d2 8f 9a d3 b3 c9 f7 bc 2c 7e a7 46 75 51 f8 d7 ........,~.FuQ.. 00:22:29.568 00000080 f1 7b ce 48 51 58 f7 05 e6 6c c2 ea ab 84 ae 27 .{.HQX...l.....' 00:22:29.568 00000090 37 ea f6 3e 43 cf a5 5a 6a d4 de 62 04 03 63 c9 7..>C..Zj..b..c. 00:22:29.568 000000a0 cc b3 ee 77 3e e5 76 93 9c 9b 01 7e c2 09 32 a2 ...w>.v....~..2. 
00:22:29.569 000000b0 ba 10 12 ad cf ec 41 4d 6b 27 3b 59 e6 ac e2 28 ......AMk';Y...( 00:22:29.569 000000c0 f3 41 77 ec ef 14 41 0c 3c 23 21 fc 37 4f f6 08 .Aw...A.<#!.7O.. 00:22:29.569 000000d0 71 81 67 43 bd 1c 7f 77 5f c5 e7 5f 4f 4b e7 d2 q.gC...w_.._OK.. 00:22:29.569 000000e0 da 83 95 6b 51 ff a7 8f 11 69 ec b8 f0 0d 4b 8f ...kQ....i....K. 00:22:29.569 000000f0 b9 da 7c 23 02 8f b0 8e e1 70 c4 55 ee 32 9d 0c ..|#.....p.U.2.. 00:22:29.569 00000100 d7 f2 91 3c 71 b5 0b 6e 82 4d a0 81 c2 19 2a 48 .....P... 00:22:29.569 000002e0 5f 5f c7 e7 8e d7 b7 74 b9 61 fe a3 88 f8 d0 35 __.....t.a.....5 00:22:29.569 000002f0 42 bd 28 5f ed 10 6d 8f 56 73 58 12 b1 32 9a cd B.(_..m.VsX..2.. 00:22:29.569 00000300 94 7b b0 70 f7 f0 df 7c ea bb 89 ae f7 78 cf ed .{.p...|.....x.. 00:22:29.569 00000310 90 c2 34 95 62 4a c6 41 95 c1 50 d5 5b 2d 05 2e ..4.bJ.A..P.[-.. 00:22:29.569 00000320 54 f8 f2 81 af 53 35 7b 9e 67 55 5d f7 71 4f d1 T....S5{.gU].qO. 00:22:29.569 00000330 70 de aa 2f 1b 9c 28 9f d9 12 95 b2 1c 5e 84 70 p../..(......^.p 00:22:29.569 00000340 a8 a5 14 5e 36 ff 49 dc 5d 37 70 45 97 34 10 d6 ...^6.I.]7pE.4.. 00:22:29.569 00000350 6a 36 f6 f7 60 36 00 d8 67 d1 76 8f bd 82 8c f8 j6..`6..g.v..... 00:22:29.569 00000360 ee 0e 16 8a fd bc db 4a d0 6f 2f 20 22 a0 07 23 .......J.o/ "..# 00:22:29.569 00000370 cf b7 0d 25 54 cd b6 dd 85 54 b9 5f a7 99 ec 35 ...%T....T._...5 00:22:29.569 00000380 b5 f0 d1 33 68 71 0e 65 17 10 e9 6f 19 21 30 26 ...3hq.e...o.!0& 00:22:29.569 00000390 66 0f a4 15 a6 dd 98 bf e7 91 d0 e3 89 21 5e df f............!^. 00:22:29.569 000003a0 20 a8 ca a8 4b 95 aa 92 c5 a7 19 7f c1 0b 03 71 ...K..........q 00:22:29.569 000003b0 b8 93 f2 08 8c cb 46 84 2f 0b b7 a2 c3 a2 7f 5a ......F./......Z 00:22:29.569 000003c0 37 16 cc ae 38 d8 bd 49 b2 f5 25 da ad 1c bc 09 7...8..I..%..... 00:22:29.569 000003d0 dd 8d 6c 2c d2 1e b3 d4 d6 04 23 d7 4c 53 f1 60 ..l,......#.LS.` 00:22:29.569 000003e0 28 2c 55 86 2b cb 37 8b c6 a9 5f 78 9b 74 22 d0 (,U.+.7..._x.t". 00:22:29.569 000003f0 ba bf 27 c4 a1 d3 fe 8f b8 18 70 07 c2 a0 9d 0c ..'.......p..... 
00:22:29.569 [2024-09-27 13:27:30.119757] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key4, hash=3, dhgroup=5, seq=3775755321, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.569 [2024-09-27 13:27:30.120016] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.569 [2024-09-27 13:27:30.223753] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.569 [2024-09-27 13:27:30.224040] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] authentication completed successfully 00:22:29.569 [2024-09-27 13:27:30.224143] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.569 [2024-09-27 13:27:30.275312] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: negotiate 00:22:29.569 [2024-09-27 13:27:30.275461] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] digest: 3 (sha512) 00:22:29.569 [2024-09-27 13:27:30.275569] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] dhgroup: 5 (ffdhe8192) 00:22:29.569 [2024-09-27 13:27:30.275899] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-negotiate 00:22:29.569 [2024-09-27 13:27:30.276221] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-challenge 00:22:29.569 ctrlr pubkey: 00:22:29.569 00000000 9d 8b 9f 7d 97 8b 40 5c 57 0b 24 63 f3 4d 78 83 ...}..@\W.$c.Mx. 00:22:29.569 00000010 fa 7f 81 ec ce ce e1 03 b9 78 e3 5f 9f de a8 fe .........x._.... 00:22:29.569 00000020 8f 4b 05 cb cd 2c 4b cf 5f 8b 0a 3e 1d 2b 20 b1 .K...,K._..>.+ . 00:22:29.569 00000030 a7 0b 7f e4 97 0a 99 0b 7c d2 60 bd 93 62 ce 24 ........|.`..b.$ 00:22:29.569 00000040 f4 72 3a 17 c2 d1 b0 16 4e 94 14 77 41 1e 36 53 .r:.....N..wA.6S 00:22:29.569 00000050 85 59 35 46 b4 af df 81 00 03 af 28 e3 4c 14 ff .Y5F.......(.L.. 00:22:29.569 00000060 57 ee 85 2e fa 80 53 dc 88 4e 4b 78 ec 6c db 45 W.....S..NKx.l.E 00:22:29.569 00000070 3a 8d f4 27 a8 5a c0 be a9 5a 72 6a 15 d6 85 98 :..'.Z...Zrj.... 00:22:29.569 00000080 21 d2 a8 00 17 9c 98 d4 29 c7 ad 88 ad 3b e3 9e !.......)....;.. 00:22:29.569 00000090 96 6d 81 32 d7 9a 8f 21 76 fc 9d 8e c6 8f 2f 39 .m.2...!v...../9 00:22:29.569 000000a0 d5 33 d3 09 d9 e2 74 26 e8 51 ba 35 ae 95 a8 86 .3....t&.Q.5.... 00:22:29.569 000000b0 e5 2c 81 b3 7d be 31 68 06 ca 4f aa a5 a6 4f a2 .,..}.1h..O...O. 00:22:29.569 000000c0 9f 8b 73 70 89 43 dd 1a f8 ec c3 ba 3d 3c 8c 4f ..sp.C......=<.O 00:22:29.569 000000d0 6e 78 f0 e2 c3 a7 4d b2 f6 a0 43 31 22 9e 50 dc nx....M...C1".P. 00:22:29.569 000000e0 4e 91 3d 21 a9 a0 06 39 5c dd 9a d2 5d 43 c5 4c N.=!...9\...]C.L 00:22:29.569 000000f0 0a cd cf 59 05 a8 19 2f 7c 8c 84 09 38 65 b1 47 ...Y.../|...8e.G 00:22:29.569 00000100 d0 ac e7 e4 5d b1 56 c0 ba ef b3 d6 2e aa a2 39 ....].V........9 00:22:29.569 00000110 e2 6a 2e dd 43 ae 9c 7e 76 0a 42 93 d9 e9 a2 9e .j..C..~v.B..... 
00:22:29.569 00000120 2d 86 c8 7e 16 e6 5f eb a2 78 cb 28 e4 ec 69 24 -..~.._..x.(..i$ 00:22:29.569 00000130 7c e4 61 2f 0a fe 96 df 0e 16 8d 25 76 20 c9 05 |.a/.......%v .. 00:22:29.569 00000140 68 c1 e8 eb d9 27 2d ab c5 e1 4f cb ef 21 24 a1 h....'-...O..!$. 00:22:29.569 00000150 62 3f 78 0c 81 00 07 e4 e2 cc cd 5e ff 55 ff 7d b?x........^.U.} 00:22:29.569 00000160 1f 11 7c 10 70 65 92 23 a2 66 80 a9 8a 82 23 18 ..|.pe.#.f....#. 00:22:29.569 00000170 26 53 34 16 e7 4b c9 df 2e de f2 f8 d7 e9 75 bc &S4..K........u. 00:22:29.569 00000180 10 f5 9b a5 92 9b ea 06 1d 09 97 11 2e 7d 18 da .............}.. 00:22:29.569 00000190 50 85 ee 48 d2 d9 2f 87 a1 2a 14 b2 ba 3c 64 24 P..H../..*....+G4<. 00:22:29.569 000001f0 bd e6 60 2d 4d e3 18 0e 39 50 14 6f d8 d2 1b d9 ..`-M...9P.o.... 00:22:29.569 00000200 25 f9 b1 d1 0d 19 3f 50 d1 40 35 e5 f3 8b 12 9b %.....?P.@5..... 00:22:29.569 00000210 c7 68 6e bc 43 f2 be bf 0a d0 74 84 2e c9 ae 38 .hn.C.....t....8 00:22:29.569 00000220 2a 04 81 4b 41 ef 64 34 f3 c9 ba 4a cb dc 76 69 *..KA.d4...J..vi 00:22:29.569 00000230 93 e1 98 b3 d5 0d f5 f9 cc 4d 85 98 14 a2 15 7d .........M.....} 00:22:29.569 00000240 f0 76 77 21 81 cc 97 9b 36 3e 7b d9 44 ff 63 85 .vw!....6>{.D.c. 00:22:29.569 00000250 68 d7 ab f1 38 b1 bd 99 e6 d0 c1 cb ca 8d c4 ee h...8........... 00:22:29.569 00000260 78 45 1e 94 77 ca 12 60 e9 4e 5f ab 1e fb 2b 03 xE..w..`.N_...+. 00:22:29.569 00000270 3f 41 5e fa 39 a3 a1 76 5d 7a 46 7c 76 03 cb f5 ?A^.9..v]zF|v... 00:22:29.569 00000280 75 f7 68 05 80 7d ad 98 18 23 5d a3 40 f3 98 a6 u.h..}...#].@... 00:22:29.569 00000290 e5 da 7e f1 ec 6b 79 6e c8 72 cf 5c fb 8d e9 58 ..~..kyn.r.\...X 00:22:29.569 000002a0 ef 5b 75 b2 e2 a3 6c 6e 99 9f 5f ec 89 ba 49 64 .[u...ln.._...Id 00:22:29.569 000002b0 a3 db a8 6c d1 82 b5 c5 be 4f 98 a4 d7 07 10 3e ...l.....O.....> 00:22:29.569 000002c0 0b 4f 03 65 53 45 fd 1e d9 f2 04 cf 9a e3 45 61 .O.eSE........Ea 00:22:29.569 000002d0 52 a6 35 36 84 d1 a6 e9 40 e4 e4 66 5a 08 d6 1d R.56....@..fZ... 00:22:29.569 000002e0 aa 87 14 95 ae f9 20 51 c8 4a 75 f4 bf 8a 46 e3 ...... Q.Ju...F. 00:22:29.569 000002f0 86 e8 7c f9 4a 46 d1 da 98 22 d7 30 92 90 d9 34 ..|.JF...".0...4 00:22:29.569 00000300 66 da cf d2 b6 39 46 2a a3 3d 85 50 b3 84 82 c0 f....9F*.=.P.... 00:22:29.569 00000310 88 b5 f5 b4 ee 49 f6 20 58 2a 86 4f a6 26 0f 61 .....I. X*.O.&.a 00:22:29.569 00000320 b9 a6 3c ac 61 48 e7 87 49 8a fb fc 4a 3a 77 26 ..<.aH..I...J:w& 00:22:29.569 00000330 98 fd ed 5b 94 8d 89 5b a1 25 a0 23 0f 91 e1 a2 ...[...[.%.#.... 00:22:29.569 00000340 ad bc 06 15 ed dc ca 23 10 ce 28 71 58 3e eb ac .......#..(qX>.. 00:22:29.569 00000350 b1 80 0f 24 b8 f5 80 cb ef cb 38 11 65 e8 c5 69 ...$......8.e..i 00:22:29.569 00000360 01 0b f7 39 ae 09 b9 03 60 37 8b d1 b6 01 81 7e ...9....`7.....~ 00:22:29.569 00000370 14 23 0d e5 34 e5 78 61 25 b0 c8 0e e3 ab 12 ea .#..4.xa%....... 00:22:29.569 00000380 e1 e7 22 00 dd d7 6b 3e 42 3a b6 c7 36 e9 7f 1f .."...k>B:..6... 00:22:29.569 00000390 7c bb 62 dc c3 af 30 72 9c 77 43 fb 52 f0 65 3f |.b...0r.wC.R.e? 00:22:29.569 000003a0 62 04 a4 ce b7 54 d9 2f 54 b2 13 71 11 b1 8f a0 b....T./T..q.... 00:22:29.569 000003b0 99 8e b0 c7 19 79 81 12 9c eb 1d 65 f8 97 f8 ba .....y.....e.... 00:22:29.569 000003c0 12 54 59 a8 06 c4 82 71 1c 58 3d 1f c7 8c 72 f7 .TY....q.X=...r. 00:22:29.569 000003d0 17 ca e7 a6 4a 5c b6 e4 21 29 65 49 6a 73 38 3d ....J\..!)eIjs8= 00:22:29.569 000003e0 e6 00 fb 6e 93 a3 00 63 39 4a 26 c4 57 9f 15 5d ...n...c9J&.W..] 
00:22:29.569 000003f0 4c 06 72 5d e1 bc 6d 38 2e a7 ae 69 19 24 9f dd L.r]..m8...i.$.. 00:22:29.569 host pubkey: 00:22:29.569 00000000 41 24 d2 eb 13 32 36 80 04 e8 5f 0c 7f 27 b6 f2 A$...26..._..'.. 00:22:29.569 00000010 a4 fd f9 61 6b 1e 0e f4 39 b7 fd 00 91 98 de 83 ...ak...9....... 00:22:29.569 00000020 96 d0 2b cf 3e ba 1f c5 1b a2 10 c5 e5 87 1c f0 ..+.>........... 00:22:29.569 00000030 c2 8c f8 2c 8e 39 2c de f3 e0 6f ea 2d af 89 9d ...,.9,...o.-... 00:22:29.569 00000040 6d 60 dd 7f 3b c1 b5 7c 19 6c 6c a9 e2 a7 e6 41 m`..;..|.ll....A 00:22:29.569 00000050 56 be 46 88 8f bc a5 90 34 ed e6 a1 67 e4 22 bc V.F.....4...g.". 00:22:29.569 00000060 bf 99 bb d3 3d 1e 38 97 fe e6 af 75 2d 86 c5 24 ....=.8....u-..$ 00:22:29.569 00000070 17 24 c6 86 96 98 e5 2d c9 e6 24 61 af eb 02 b5 .$.....-..$a.... 00:22:29.569 00000080 20 e7 e4 b8 04 71 b6 c8 2d f1 6d 4b 6e 34 62 6e ....q..-.mKn4bn 00:22:29.569 00000090 97 eb a6 af 25 72 94 57 57 89 91 00 ee ec e2 e9 ....%r.WW....... 00:22:29.569 000000a0 a4 50 38 7a d4 b5 f9 3c 76 bd 81 b3 73 ff d2 7b .P8z.......I.. 00:22:29.569 000003a0 98 a6 ed bf dd 52 28 c6 d6 ad 2c c3 19 ac a6 7c .....R(...,....| 00:22:29.569 000003b0 72 ea 13 67 26 e6 4f c7 80 cb 58 bf 97 f0 78 f3 r..g&.O...X...x. 00:22:29.569 000003c0 22 5d 65 9c 3a ee b2 fc 45 7e 0a 7d 8c 44 df 49 "]e.:...E~.}.D.I 00:22:29.569 000003d0 90 28 e3 cc b4 78 79 45 0e f3 c6 a7 cb a3 68 02 .(...xyE......h. 00:22:29.569 000003e0 3e 7e fb 84 8b 8e 0d d4 75 92 b8 d8 ce b3 26 82 >~......u.....&. 00:22:29.569 000003f0 e0 87 c7 a1 9a 59 cd 97 60 ce d6 4a 15 3a 1c ed .....Y..`..J.:.. 00:22:29.569 dh secret: 00:22:29.569 00000000 68 88 eb 19 df e2 08 33 e8 46 ef 6f 5c 12 0a 7b h......3.F.o\..{ 00:22:29.569 00000010 48 b6 a1 0e 30 a5 38 5e fc bf 7b 1a 7f bc 9b a1 H...0.8^..{..... 00:22:29.569 00000020 e6 73 ed 1a 85 bd 2f 63 27 1f a4 96 79 27 14 25 .s..../c'...y'.% 00:22:29.569 00000030 cf a5 07 57 67 91 76 0c 01 d9 93 55 eb ce 40 79 ...Wg.v....U..@y 00:22:29.569 00000040 84 02 ff 1c e9 1b 00 7b 1b 4a d8 eb 27 7c bb cb .......{.J..'|.. 00:22:29.569 00000050 b2 10 69 de 3d 43 be 19 e8 5a c6 c1 05 1e 88 c8 ..i.=C...Z...... 00:22:29.569 00000060 a0 8f 34 73 7a d0 8c d6 b8 07 34 37 b1 bc 34 f4 ..4sz.....47..4. 00:22:29.569 00000070 fb 00 94 32 62 0b c9 18 79 24 13 8b 22 60 1f 18 ...2b...y$.."`.. 00:22:29.569 00000080 0a 70 ec ac 9f c3 82 9f 49 99 db 5d d2 5c 3a fd .p......I..].\:. 00:22:29.569 00000090 ae 5d af 97 ad d8 1d 9b f9 48 bd 64 0f ec 32 2f .].......H.d..2/ 00:22:29.569 000000a0 99 c2 93 c4 9a 62 d1 37 17 f5 54 ba b8 dc e7 0c .....b.7..T..... 00:22:29.569 000000b0 41 2f 0f 60 07 92 90 a1 d4 ef 21 d8 95 3b 5c 6a A/.`......!..;\j 00:22:29.569 000000c0 51 9e 74 b0 a1 87 ac 63 92 6c fd ed c6 1b f4 25 Q.t....c.l.....% 00:22:29.569 000000d0 88 33 32 06 db 2f c6 bc 7f 72 b0 46 5a 48 6b 75 .32../...r.FZHku 00:22:29.569 000000e0 4e a3 0b 53 c6 a3 60 9b 03 51 a1 4c ea 94 70 5c N..S..`..Q.L..p\ 00:22:29.569 000000f0 d9 a5 8b 2f fc e2 2c ba 0c 2f 23 15 68 71 9e 03 .../..,../#.hq.. 00:22:29.569 00000100 2e 83 16 a7 90 4e 26 71 2e b3 7e 31 7b be e3 4a .....N&q..~1{..J 00:22:29.569 00000110 e6 fd a0 7f 73 37 02 d0 f8 5e 43 48 01 2f 28 0a ....s7...^CH./(. 00:22:29.569 00000120 5e 06 78 ad 6f 5f 69 f9 fd 09 49 e3 4a 51 1f f1 ^.x.o_i...I.JQ.. 00:22:29.569 00000130 bb 85 f5 c3 4a bb cf b3 07 96 32 e2 14 98 9f dc ....J.....2..... 
00:22:29.569 00000140 b2 76 67 b7 22 82 34 76 2d f9 6e f7 40 23 6d 71 .vg.".4v-.n.@#mq 00:22:29.570 00000150 0d b2 6e b5 c3 c3 82 f3 4f 82 e8 0c 09 52 bc 6f ..n.....O....R.o 00:22:29.570 00000160 aa e1 70 e5 f7 35 1f 4b 98 98 6c cc ff 2c 49 a5 ..p..5.K..l..,I. 00:22:29.570 00000170 9e 71 06 de b1 5f d6 26 43 ff f6 b6 73 6c c3 64 .q..._.&C...sl.d 00:22:29.570 00000180 95 68 57 e5 0f ec ca 83 b8 eb 18 42 5c 2b 98 1f .hW........B\+.. 00:22:29.570 00000190 07 d3 e7 a8 ea e8 84 37 7a 9c 29 50 21 40 19 fb .......7z.)P!@.. 00:22:29.570 000001a0 d1 a6 d3 40 a8 14 33 de 8f 67 a4 ee 94 a2 b0 f6 ...@..3..g...... 00:22:29.570 000001b0 be a2 dc 21 66 b1 66 c6 84 dd bf 1a 90 a5 9c 51 ...!f.f........Q 00:22:29.570 000001c0 a4 2e 31 93 7f 12 e3 76 15 29 4d 38 08 35 b3 fd ..1....v.)M8.5.. 00:22:29.570 000001d0 87 bd 64 ca c6 a3 2e 16 14 35 f0 bc a3 58 20 73 ..d......5...X s 00:22:29.570 000001e0 38 dc 99 e2 6a 65 36 d2 f3 03 83 7c b4 87 7d 98 8...je6....|..}. 00:22:29.570 000001f0 60 0d b3 48 eb 3b 58 de 7b 7d 1f 4c d1 b9 60 a3 `..H.;X.{}.L..`. 00:22:29.570 00000200 8b 64 47 b3 ca 46 a0 7d 0f 68 7f 96 06 fb 01 9e .dG..F.}.h...... 00:22:29.570 00000210 89 10 f3 9f 4d 77 14 e3 52 9b 0b e9 48 98 1f a0 ....Mw..R...H... 00:22:29.570 00000220 a4 5b bd b7 90 4c 23 28 5e 44 12 f3 02 52 3e cb .[...L#(^D...R>. 00:22:29.570 00000230 59 85 f7 ed f8 8b 3d 16 9a 94 07 52 05 14 d4 3e Y.....=....R...> 00:22:29.570 00000240 ab ef 8b 74 85 b0 8a 09 8c 49 35 c2 30 69 46 4f ...t.....I5.0iFO 00:22:29.570 00000250 d1 bf 93 4e 1e 01 0a 26 df 78 ce f0 24 dc 98 f1 ...N...&.x..$... 00:22:29.570 00000260 f9 86 2a 91 35 b9 6e f9 4b 14 2a 98 f9 9c e3 70 ..*.5.n.K.*....p 00:22:29.570 00000270 eb ff d9 d9 fd 24 d6 2d b2 73 8c 51 54 e8 7e 09 .....$.-.s.QT.~. 00:22:29.570 00000280 27 2f be 13 24 ee 66 7d 47 cd 41 75 ba 1a d6 44 '/..$.f}G.Au...D 00:22:29.570 00000290 3d df 5f cc f6 63 a9 30 be 90 8b 21 8f ac a6 26 =._..c.0...!...& 00:22:29.570 000002a0 3a 59 1b 8c 92 d5 65 54 b6 71 a4 8c d8 a9 b8 d7 :Y....eT.q...... 00:22:29.570 000002b0 5f 81 4d 63 a4 28 f5 8d d9 d4 6a dd fa 41 11 fe _.Mc.(....j..A.. 00:22:29.570 000002c0 fb 19 9c d9 aa d6 32 aa 9f 26 5f 52 a5 69 c1 41 ......2..&_R.i.A 00:22:29.570 000002d0 d8 c9 9d 00 89 e4 2f 17 a9 c3 ff 1d 8a 75 69 d8 ....../......ui. 00:22:29.570 000002e0 9e 35 1f 9b 76 14 49 1c c6 84 b5 0a 91 36 1f 1d .5..v.I......6.. 00:22:29.570 000002f0 c4 72 40 a1 ed 98 59 57 47 6b 1b 0b 97 61 66 fd .r@...YWGk...af. 00:22:29.570 00000300 3a a4 d5 44 d2 0a 52 87 80 40 ef 7c d6 77 34 d9 :..D..R..@.|.w4. 00:22:29.570 00000310 bc 6d 5c cc 21 0d e8 b6 03 98 0d 86 af 78 3d bc .m\.!........x=. 00:22:29.570 00000320 76 16 6f d1 48 0a d7 89 c3 cb cb 94 0c 8a 52 e8 v.o.H.........R. 00:22:29.570 00000330 2c d3 69 d7 5a b1 24 09 0a 54 e3 07 39 a9 13 b6 ,.i.Z.$..T..9... 
00:22:29.570 00000340 47 43 a5 6f f2 15 47 46 0c 63 89 4d 89 88 1b 33 GC.o..GF.c.M...3 00:22:29.570 00000350 bb d9 96 1d af 3c 69 fd 9c f7 b2 70 80 09 89 67 ..........0....C 00:22:29.570 [2024-09-27 13:27:30.532169] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] key=key4, hash=3, dhgroup=5, seq=3775755322, tid=1, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=64 00:22:29.570 [2024-09-27 13:27:30.532462] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-reply 00:22:29.570 [2024-09-27 13:27:30.647930] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: await-success1 00:22:29.570 [2024-09-27 13:27:30.648335] nvme_auth.c:1179:nvme_fabric_qpair_authenticate_poll: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] authentication completed successfully 00:22:29.570 [2024-09-27 13:27:30.648458] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:1] auth state: done 00:22:29.570 [2024-09-27 13:27:30.743749] nvme_auth.c:1230:nvme_fabric_qpair_authenticate_async: *ERROR*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] missing DH-HMAC-CHAP key 00:22:29.570 [2024-09-27 13:27:30.743915] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b6eaa0 00:22:29.570 [2024-09-27 13:27:30.744651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6eaa0 (9): Bad file descriptor 00:22:29.570 [2024-09-27 13:27:30.745627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2024-02.io.spdk:cnode0] Ctrlr is in error state 00:22:29.570 [2024-09-27 13:27:30.745769] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.1 00:22:29.570 [2024-09-27 13:27:30.745864] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.1 trsvcid=4420 subnqn=nqn.2024-02.io.spdk:cnode0, Operation not permitted 00:22:29.570 [2024-09-27 13:27:30.746392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2024-02.io.spdk:cnode0] in failed state. 00:22:29.570 [2024-09-27 13:27:30.824545] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.570 [2024-09-27 13:27:30.824836] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.570 [2024-09-27 13:27:30.824914] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 1 (ffdhe2048) 00:22:29.570 [2024-09-27 13:27:30.825048] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.570 [2024-09-27 13:27:30.825296] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.570 ctrlr pubkey: 00:22:29.570 00000000 4e 2a d7 af a6 3f 3c 2b 06 e8 c9 18 9f e0 18 35 N*...?<+.......5 00:22:29.570 00000010 1a 12 fc ca da 48 d2 59 cb 18 4a b4 62 f3 c9 e5 .....H.Y..J.b... 00:22:29.570 00000020 3d f5 f9 34 07 4f 1b 9d 9f a8 ca 1d 49 15 71 dd =..4.O......I.q. 00:22:29.570 00000030 65 29 38 1b 71 f5 a3 18 fe e8 5e 85 60 bb 37 8d e)8.q.....^.`.7. 
00:22:29.570 00000040 2d f1 a0 3a 8b 6d df 50 e9 58 83 53 78 d1 92 e8 -..:.m.P.X.Sx... 00:22:29.570 00000050 4e 61 f6 12 8f 16 ca 9f 50 bf 66 fe 30 71 bd bd Na......P.f.0q.. 00:22:29.570 00000060 e8 15 25 42 a1 62 cf 43 c6 70 11 ba 49 e9 64 c7 ..%B.b.C.p..I.d. 00:22:29.570 00000070 11 3e 3e 46 02 37 03 b2 a5 a1 b5 79 61 ed 07 1c .>>F.7.....ya... 00:22:29.570 00000080 01 30 db 4f 72 d1 9d cd 64 4d ba ab 42 16 30 0c .0.Or...dM..B.0. 00:22:29.570 00000090 87 8b 6e 8a 20 75 97 31 74 56 9a 1c 96 39 ad e9 ..n. u.1tV...9.. 00:22:29.570 000000a0 47 ad 61 83 70 f7 aa dd ea 44 63 15 30 cc 98 e0 G.a.p....Dc.0... 00:22:29.570 000000b0 1f eb e5 9e eb 8e e4 02 7a 6d 65 ec f6 9e d1 ad ........zme..... 00:22:29.570 000000c0 2a 87 91 02 d9 f2 95 38 66 0c f1 64 99 21 1e aa *......8f..d.!.. 00:22:29.570 000000d0 d1 37 75 82 2b 1a 36 ac e8 1d e0 8a 25 5a eb ee .7u.+.6.....%Z.. 00:22:29.570 000000e0 64 1c b2 84 1c 6d 52 b3 b3 0d 4e a4 f9 15 ad 94 d....mR...N..... 00:22:29.570 000000f0 d5 4a ec ad 94 7c c4 59 b8 d2 2b 90 03 6d c1 12 .J...|.Y..+..m.. 00:22:29.570 host pubkey: 00:22:29.570 00000000 17 20 61 07 cd c8 52 e3 a1 c0 a2 71 7b fa cb 83 . a...R....q{... 00:22:29.570 00000010 14 ce 8b 4b bd 13 1f 25 2b cd 84 c1 64 12 7a 0e ...K...%+...d.z. 00:22:29.570 00000020 69 0c cd 9f 25 fc c5 1c f2 db 0f 6f 52 f9 e1 4f i...%......oR..O 00:22:29.570 00000030 9c d7 09 ab 06 af ee c3 0e a3 93 b1 b2 97 df ab ................ 00:22:29.570 00000040 d1 d3 2e 83 ac c8 78 b4 8e 74 06 3a f9 70 fb c4 ......x..t.:.p.. 00:22:29.570 00000050 c5 c8 1c d0 ae d6 28 67 9f 5d 95 3f d4 f0 35 a1 ......(g.].?..5. 00:22:29.570 00000060 39 63 d8 a7 e4 03 d6 fd 95 a6 b5 05 80 df 41 47 9c............AG 00:22:29.570 00000070 a8 e9 bb 11 cc da fb 4b 44 d2 bc 7d a5 15 4e 09 .......KD..}..N. 00:22:29.570 00000080 7a 99 f9 f6 48 02 43 89 fe b8 4e 34 90 d0 69 50 z...H.C...N4..iP 00:22:29.570 00000090 83 58 a5 55 dd 00 14 66 1e 17 48 1e f8 08 63 ce .X.U...f..H...c. 00:22:29.570 000000a0 c4 7b e7 05 5a 37 9f 32 ba 92 78 c6 83 d7 5f af .{..Z7.2..x..._. 00:22:29.570 000000b0 49 4f 86 ae b8 1e 96 8e ba 5d 5c bb 20 a9 9e 0b IO.......]\. ... 00:22:29.570 000000c0 d6 54 b0 f5 6b 4f 6c b6 a7 ea 78 9e 30 38 58 af .T..kOl...x.08X. 00:22:29.570 000000d0 10 86 82 0f d8 4b be 80 79 6a bd df 43 53 5a d6 .....K..yj..CSZ. 00:22:29.570 000000e0 7b 76 5c 75 c2 12 17 9f 9a 6c a4 bc 64 40 31 03 {v\u.....l..d@1. 00:22:29.570 000000f0 2d d7 be 19 12 05 bc ed b3 ce fa 1f 3d 26 17 6e -...........=&.n 00:22:29.570 dh secret: 00:22:29.570 00000000 85 b0 78 28 1a d2 fc 72 8d 86 0d 97 a5 7d 6b 87 ..x(...r.....}k. 00:22:29.570 00000010 56 ab 87 30 51 cc 36 0a 97 ce 74 96 fe 14 be 8e V..0Q.6...t..... 00:22:29.570 00000020 fa 23 0b 49 0d aa 61 f4 8c 85 99 7c d1 1e 30 90 .#.I..a....|..0. 00:22:29.570 00000030 d0 34 37 92 4a 15 f1 32 8c e8 4c df 79 c3 62 69 .47.J..2..L.y.bi 00:22:29.570 00000040 78 af 07 46 26 59 7c 0f 51 9b e6 aa 71 6c 60 d0 x..F&Y|.Q...ql`. 00:22:29.570 00000050 21 5d e4 18 87 7a a6 5e 5a b8 07 22 79 40 eb 1a !]...z.^Z.."y@.. 00:22:29.570 00000060 72 2e 58 bb c5 40 43 a4 2b 18 23 b8 f0 94 ae 54 r.X..@C.+.#....T 00:22:29.570 00000070 e4 d6 e3 ff 14 62 b6 c1 2d c8 d2 59 d1 04 82 0e .....b..-..Y.... 00:22:29.570 00000080 bf 26 24 c3 4c ab f8 ab 68 46 00 e6 1c e8 08 99 .&$.L...hF...... 00:22:29.570 00000090 1d 97 8a 77 0e 3e 14 f5 b0 d0 65 2d a8 a3 68 e1 ...w.>....e-..h. 00:22:29.570 000000a0 bd ef 88 3d 16 78 b2 12 d9 1d df 9b d3 c0 43 ba ...=.x........C. 00:22:29.570 000000b0 0d bd 93 b5 a6 b8 33 16 b9 02 a9 3a f4 dc 71 94 ......3....:..q. 
00:22:29.570 000000c0 3c 3d 88 ef 7c 49 0c 73 11 b8 2c 31 1c 3e 8b ad <=..|I.s..,1.>.. 00:22:29.570 000000d0 ec 1d 0e c2 23 4a 23 a2 f3 0e 8d 5d be 37 20 76 ....#J#....].7 v 00:22:29.570 000000e0 07 66 f6 d0 52 8e 3e 39 97 6d d9 1a 8a e4 1a 0a .f..R.>9.m...... 00:22:29.570 000000f0 36 98 f4 15 a2 0c d1 fc aa 59 59 b2 f1 cc 9c f4 6........YY..... 00:22:29.570 [2024-09-27 13:27:30.832735] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key2, hash=1, dhgroup=1, seq=3775755323, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.570 [2024-09-27 13:27:30.832927] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.570 [2024-09-27 13:27:30.838021] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.570 [2024-09-27 13:27:30.840003] nvme_auth.c: 764:nvme_auth_check_message: *ERROR*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] received AUTH_failure1: rc=1, rce=1 (authentication failed) 00:22:29.570 [2024-09-27 13:27:30.840128] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.571 [2024-09-27 13:27:30.840274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b6eaa0 00:22:29.571 [2024-09-27 13:27:30.840394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6eaa0 (9): Bad file descriptor 00:22:29.571 [2024-09-27 13:27:30.841364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2024-02.io.spdk:cnode0] Ctrlr is in error state 00:22:29.571 [2024-09-27 13:27:30.841497] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.1 00:22:29.571 [2024-09-27 13:27:30.841549] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.1 trsvcid=4420 subnqn=nqn.2024-02.io.spdk:cnode0, Operation not permitted 00:22:29.571 [2024-09-27 13:27:30.841636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2024-02.io.spdk:cnode0] in failed state. 00:22:29.571 [2024-09-27 13:27:30.916392] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: negotiate 00:22:29.571 [2024-09-27 13:27:30.916538] nvme_auth.c: 796:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] digest: 1 (sha256) 00:22:29.571 [2024-09-27 13:27:30.916649] nvme_auth.c: 804:nvme_auth_send_negotiate: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] dhgroup: 1 (ffdhe2048) 00:22:29.571 [2024-09-27 13:27:30.916835] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-negotiate 00:22:29.571 [2024-09-27 13:27:30.917031] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-challenge 00:22:29.571 ctrlr pubkey: 00:22:29.571 00000000 70 6e bf 25 6f d5 20 16 be d1 39 c5 46 3a 1f 7d pn.%o. ...9.F:.} 00:22:29.571 00000010 c8 c0 59 1e 56 92 77 98 91 a3 a2 a6 e9 5b bc 8b ..Y.V.w......[.. 00:22:29.571 00000020 d1 eb c4 d0 51 2b ca 2a 06 4b be 2c cb 7a 5f 4e ....Q+.*.K.,.z_N 00:22:29.571 00000030 0b 64 e0 c9 12 5c ee f7 93 02 b2 8b 32 22 d2 de .d...\......2".. 
00:22:29.571 00000040 f4 1b 6a 22 8a fe 23 01 56 fb 5a 86 9a 7e af 74 ..j"..#.V.Z..~.t 00:22:29.571 00000050 bb 09 be 21 bd 06 23 2d d8 04 c3 65 a4 b6 5d 74 ...!..#-...e..]t 00:22:29.571 00000060 a2 b2 a5 0b f8 68 48 fd 53 63 01 47 96 81 a8 ad .....hH.Sc.G.... 00:22:29.571 00000070 5f a5 84 70 8f 5d a3 cf c4 23 41 10 82 4d 4e 2e _..p.]...#A..MN. 00:22:29.571 00000080 88 ce 32 1a d9 c4 03 1d 10 c6 6a 44 29 73 5e b6 ..2.......jD)s^. 00:22:29.571 00000090 df 40 c1 0f 33 22 01 14 1d 8f b8 d9 19 dc e0 65 .@..3".........e 00:22:29.571 000000a0 0c d9 e5 f2 a9 98 98 79 67 61 0a a5 63 72 ad 81 .......yga..cr.. 00:22:29.571 000000b0 fd 22 eb 4d bc 4c d9 0f d2 a1 4d a8 91 c8 ee 48 .".M.L....M....H 00:22:29.571 000000c0 b5 0d 20 57 31 0e d8 69 20 6e dc b7 92 b8 4c 3f .. W1..i n....L? 00:22:29.571 000000d0 96 7b e2 5a e4 2c 99 6d f0 e8 90 3b bb 7a 5e 55 .{.Z.,.m...;.z^U 00:22:29.571 000000e0 25 0b 7a 87 a2 81 6b 9a 18 7b 6d 5d e8 de 4a ef %.z...k..{m]..J. 00:22:29.571 000000f0 8e 01 d0 ac 12 ec a3 56 e4 2c 53 3a 45 56 8a e2 .......V.,S:EV.. 00:22:29.571 host pubkey: 00:22:29.571 00000000 cb 49 7b d2 17 92 ee 77 71 1b 91 02 16 51 6e 33 .I{....wq....Qn3 00:22:29.571 00000010 32 37 88 ab 93 64 e3 93 83 58 b6 2f 97 24 c3 95 27...d...X./.$.. 00:22:29.571 00000020 e2 ce cb 9c 92 15 be b5 c0 93 aa ca 8c e4 0b 29 ...............) 00:22:29.571 00000030 72 97 e5 39 3d 64 44 93 92 ca c2 62 90 c6 c6 80 r..9=dD....b.... 00:22:29.571 00000040 5f cc e5 e0 1e 7e 75 25 69 4e a7 38 a0 fb 43 17 _....~u%iN.8..C. 00:22:29.571 00000050 02 45 4d 66 42 07 8e 80 df a7 28 40 98 49 70 83 .EMfB.....(@.Ip. 00:22:29.571 00000060 2a f6 c1 3d fa a7 a9 27 a1 65 cb 32 96 4b 06 b9 *..=...'.e.2.K.. 00:22:29.571 00000070 55 9c 94 55 8f a4 85 f5 d5 ea 9a e3 c3 6b 11 f4 U..U.........k.. 00:22:29.571 00000080 19 7d f2 8d 05 b1 32 cc 4e 11 3a 8a 68 99 f6 08 .}....2.N.:.h... 00:22:29.571 00000090 3f a3 b8 94 38 a6 ea 56 8c 11 70 81 55 a8 9c 31 ?...8..V..p.U..1 00:22:29.571 000000a0 d3 68 9c 54 3e 3d 24 c0 6b 93 68 13 8a e6 52 a4 .h.T>=$.k.h...R. 00:22:29.571 000000b0 1a 21 0d 9e 6a 77 af 49 85 3a 8d 7f 12 f3 3f 83 .!..jw.I.:....?. 00:22:29.571 000000c0 a9 b4 e4 e6 0f 1d 48 e9 e4 d4 fe 0b 30 a1 0f 99 ......H.....0... 00:22:29.571 000000d0 52 42 12 ab 79 2c 72 9c ce eb 10 aa 5e c7 90 08 RB..y,r.....^... 00:22:29.571 000000e0 0e e0 8e e4 3e 9d 6d c2 7a 81 6c 71 e9 5a 12 50 ....>.m.z.lq.Z.P 00:22:29.571 000000f0 dc b1 64 ce d5 b7 dd f2 92 06 b7 16 f3 d8 6b 73 ..d...........ks 00:22:29.571 dh secret: 00:22:29.571 00000000 40 e1 be b7 4b 72 29 d6 36 e8 3a fd a1 b8 a5 60 @...Kr).6.:....` 00:22:29.571 00000010 62 d5 7e 1d b8 a5 20 8b dd bd 07 04 b4 12 72 e6 b.~... .......r. 00:22:29.571 00000020 87 8a bf 5a e9 69 38 2e 64 84 46 ea 45 ce 19 38 ...Z.i8.d.F.E..8 00:22:29.571 00000030 ba 9f 02 e9 31 df f8 98 30 a2 cf 66 84 3a f2 8f ....1...0..f.:.. 00:22:29.571 00000040 1a 4d 5a 80 f4 46 17 72 0c 61 0a cc a0 79 aa 8f .MZ..F.r.a...y.. 00:22:29.571 00000050 f4 d9 bb ca da 93 8a 37 62 45 7d 3a 43 90 eb 6f .......7bE}:C..o 00:22:29.571 00000060 9f 9f b2 24 29 30 8d a7 22 82 3f e1 82 d5 c5 ac ...$)0..".?..... 00:22:29.571 00000070 19 e7 2a 28 ad 85 a6 cb 69 c5 47 ec e4 14 86 2a ..*(....i.G....* 00:22:29.571 00000080 ae 5d 20 80 4b 0d 7e 5b 67 36 3c d2 57 1b b5 92 .] .K.~[g6<.W... 00:22:29.571 00000090 7d 41 61 5f c6 e0 d1 3e 69 87 0c 93 27 e5 df 8a }Aa_...>i...'... 00:22:29.571 000000a0 1a 9e 77 03 f1 2c f2 70 c2 47 34 ba 3d 7f 4e a7 ..w..,.p.G4.=.N. 00:22:29.571 000000b0 03 20 be 11 88 9b d7 20 84 a3 e2 08 c8 85 04 73 . ..... 
.......s 00:22:29.571 000000c0 2d 47 14 af 0b 3a a2 e8 58 50 bd e7 50 c2 f6 e3 -G...:..XP..P... 00:22:29.571 000000d0 8b 2d 19 a8 69 a4 bd 12 a3 fd dc 48 23 39 fb cf .-..i......H#9.. 00:22:29.571 000000e0 67 e1 71 6f 21 08 d3 7c 01 33 e7 51 ca de 99 81 g.qo!..|.3.Q.... 00:22:29.571 000000f0 af c5 f3 af 1d 8b 1d 00 6e 31 87 d3 e1 af 38 c8 ........n1....8. 00:22:29.571 [2024-09-27 13:27:30.923723] nvme_auth.c: 950:nvme_auth_send_reply: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] key=key1, hash=1, dhgroup=1, seq=3775755324, tid=0, subnqn=nqn.2024-02.io.spdk:cnode0, hostnqn=nqn.2024-02.io.spdk:host0, len=32 00:22:29.571 [2024-09-27 13:27:30.923956] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-reply 00:22:29.571 [2024-09-27 13:27:30.928492] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-success1 00:22:29.571 [2024-09-27 13:27:30.928813] nvme_auth.c:1053:nvme_auth_check_success1: *ERROR*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] controller challenge mismatch 00:22:29.571 received: 00:22:29.571 00000000 4b 75 8f 2a 7e 6a 36 80 98 0b b0 02 08 7b a2 f5 Ku.*~j6......{.. 00:22:29.571 00000010 de 61 fa 78 83 9f b6 b4 82 22 00 cc 2f 9e 35 cb .a.x....."../.5. 00:22:29.571 expected: 00:22:29.571 00000000 99 89 70 a2 29 68 7a ff 0d 7c f4 a4 ad 4a 3b a4 ..p.)hz..|...J;. 00:22:29.571 00000010 cf 5d 54 a5 1a 13 68 59 89 91 ca 7f af 30 6e f3 .]T...hY.....0n. 00:22:29.571 [2024-09-27 13:27:30.929253] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: await-failure2 00:22:29.571 [2024-09-27 13:27:30.930775] nvme_auth.c: 163:nvme_auth_set_state: *DEBUG*: [nqn.2024-02.io.spdk:cnode0:nqn.2024-02.io.spdk:host0:0] auth state: done 00:22:29.571 [2024-09-27 13:27:30.930863] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b6eaa0 00:22:29.571 [2024-09-27 13:27:30.930994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6eaa0 (9): Bad file descriptor 00:22:29.571 [2024-09-27 13:27:30.931966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2024-02.io.spdk:cnode0] Ctrlr is in error state 00:22:29.571 [2024-09-27 13:27:30.932117] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.1 00:22:29.571 [2024-09-27 13:27:30.932216] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.1 trsvcid=4420 subnqn=nqn.2024-02.io.spdk:cnode0, Operation not permitted 00:22:29.571 [2024-09-27 13:27:30.932297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2024-02.io.spdk:cnode0] in failed state. 
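[Editor's note] The *ERROR* records above mark the authentication attempts that do not complete: a qpair attached without a DH-HMAC-CHAP key, a reply rejected with AUTH_failure1 (rc=1, rce=1), and a controller challenge mismatch, each ending in "Ctrlr is in error state"; right after this point the harness's error trap runs cleanup and the test exits non-zero (see the backtraces further down). Because the surrounding nvme_auth debug output is dominated by pubkey/secret hexdumps, a small summary helper makes the per-qpair outcome easier to audit. The sketch below is a hypothetical, stand-alone Python helper (not part of the SPDK tree); it relies only on the debug/error line formats visible in this log and assumes one record per console line.
#!/usr/bin/env python3
# summarize_nvme_auth.py -- condense SPDK nvme_auth debug output into per-qpair summaries.
# Hypothetical helper: the regexes mirror the log lines in this console output, not any SPDK API.
import re
import sys
from collections import defaultdict
STATE_RE   = re.compile(r'nvme_auth_set_state: \*DEBUG\*: \[([^\]]+)\] auth state: (\S+)')
DIGEST_RE  = re.compile(r'nvme_auth_send_negotiate: \*DEBUG\*: \[([^\]]+)\] digest: \d+ \((\w+)\)')
DHGROUP_RE = re.compile(r'nvme_auth_send_negotiate: \*DEBUG\*: \[([^\]]+)\] dhgroup: \d+ \((\w+)\)')
ERROR_RE   = re.compile(r'\*ERROR\*: \[([^\]]+)\] (.+)')
def summarize(lines):
    # Key is the bracketed "subnqn:hostnqn:qid" tag that prefixes each auth record.
    # Controller-level errors (no qid suffix, e.g. "Ctrlr is in error state") land in their own bucket.
    qpairs = defaultdict(lambda: {"digest": "?", "dhgroup": "?", "states": [], "errors": []})
    for line in lines:
        if (m := STATE_RE.search(line)):
            qpairs[m.group(1)]["states"].append(m.group(2))
        elif (m := DIGEST_RE.search(line)):
            qpairs[m.group(1)]["digest"] = m.group(2)
        elif (m := DHGROUP_RE.search(line)):
            qpairs[m.group(1)]["dhgroup"] = m.group(2)
        elif (m := ERROR_RE.search(line)):
            qpairs[m.group(1)]["errors"].append(m.group(2))
    return qpairs
if __name__ == "__main__":
    for tag, info in summarize(sys.stdin).items():
        final = info["states"][-1] if info["states"] else "unknown"
        verdict = "FAILED" if info["errors"] else "ok"
        print(f"{tag}: digest={info['digest']} dhgroup={info['dhgroup']} final-state={final} ({verdict})")
Run as "python3 summarize_nvme_auth.py < console.log"; for the run above it would list the sha512/ffdhe8192 qpairs that reach "done" cleanly and flag the later attempts (missing key, AUTH_failure1, challenge mismatch) as FAILED even though their state machines also end in "done".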
00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1 -- # cleanup 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@331 -- # nvmfcleanup 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@99 -- # sync 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # set +e 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:22:29.571 rmmod nvme_tcp 00:22:29.571 rmmod nvme_fabrics 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # set -e 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # return 0 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # '[' -n 77647 ']' 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@333 -- # killprocess 77647 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 77647 ']' 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 77647 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77647 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:29.571 killing process with pid 77647 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77647' 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 77647 00:22:29.571 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 77647 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # '[' '' == iso ']' 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # nvmf_fini 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@264 -- # local dev 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@267 -- # remove_target_ns 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@268 -- # delete_main_bridge 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@131 -- # delete_dev nvmf_br 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@123 -- # local dev=nvmf_br in_ns= 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # eval ' ip link delete nvmf_br' 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # ip link delete nvmf_br 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator0/address ]] 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@276 -- # delete_dev initiator0 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@123 -- # local dev=initiator0 in_ns= 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator0' 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # ip link delete initiator0 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/initiator1/address ]] 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@275 -- # (( 3 == 3 )) 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@276 -- # delete_dev initiator1 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@123 -- # local dev=initiator1 in_ns= 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@124 -- # [[ -n '' ]] 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # eval ' ip link delete initiator1' 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # ip link delete initiator1 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target0/address ]] 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@271 -- # continue 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/target1/address ]] 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@271 -- # continue 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # _dev=0 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # dev_map=() 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@284 -- # iptr 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@538 -- # iptables-save 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@538 -- # iptables-restore 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@538 -- # grep -v SPDK_NVMF 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # echo 0 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # modules=(/sys/module/nvmet/holders/*) 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # modprobe -r nvmet_tcp nvmet 00:22:29.831 13:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:30.421 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:30.421 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:30.680 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:30.680 13:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Yjw /tmp/spdk.key-null.WNw /tmp/spdk.key-sha256.BXJ /tmp/spdk.key-sha384.Pap /tmp/spdk.key-sha512.YWd /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:22:30.680 13:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:30.938 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:30.938 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:30.938 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:30.938 13:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # trap - ERR 00:22:30.938 13:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # print_backtrace 00:22:30.938 13:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:22:30.938 13:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh' 'nvmf_auth_host' '--transport=tcp') 00:22:30.938 13:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1155 -- # local args 00:22:30.938 13:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1157 -- # xtrace_disable 00:22:30.938 13:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.938 ========== Backtrace start: ========== 00:22:30.938 00:22:30.938 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> 
run_test(["nvmf_auth_host"],["/home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh"],["--transport=tcp"]) 00:22:30.938 ... 00:22:30.938 1120 timing_enter $test_name 00:22:30.938 1121 echo "************************************" 00:22:30.938 1122 echo "START TEST $test_name" 00:22:30.938 1123 echo "************************************" 00:22:30.938 1124 xtrace_restore 00:22:30.938 1125 time "$@" 00:22:30.938 1126 xtrace_disable 00:22:30.938 1127 echo "************************************" 00:22:30.938 1128 echo "END TEST $test_name" 00:22:30.939 1129 echo "************************************" 00:22:30.939 1130 timing_exit $test_name 00:22:30.939 ... 00:22:30.939 in /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh:27 -> main(["--transport=tcp"]) 00:22:30.939 ... 00:22:30.939 22 run_test "nvmf_fio_host" $rootdir/test/nvmf/host/fio.sh "${TEST_ARGS[@]}" 00:22:30.939 23 run_test "nvmf_failover" $rootdir/test/nvmf/host/failover.sh "${TEST_ARGS[@]}" 00:22:30.939 24 run_test "nvmf_host_multipath_status" $rootdir/test/nvmf/host/multipath_status.sh "${TEST_ARGS[@]}" 00:22:30.939 25 run_test "nvmf_discovery_remove_ifc" $rootdir/test/nvmf/host/discovery_remove_ifc.sh "${TEST_ARGS[@]}" 00:22:30.939 26 run_test "nvmf_identify_kernel_target" "$rootdir/test/nvmf/host/identify_kernel_nvmf.sh" "${TEST_ARGS[@]}" 00:22:30.939 => 27 run_test "nvmf_auth_host" "$rootdir/test/nvmf/host/auth.sh" "${TEST_ARGS[@]}" 00:22:30.939 28 run_test "nvmf_bdevperf" "$rootdir/test/nvmf/host/bdevperf.sh" "${TEST_ARGS[@]}" 00:22:30.939 29 run_test "nvmf_target_disconnect" "$rootdir/test/nvmf/host/target_disconnect.sh" "${TEST_ARGS[@]}" 00:22:30.939 30 00:22:30.939 31 if [[ "$SPDK_TEST_NVMF_TRANSPORT" == "tcp" ]]; then 00:22:30.939 32 run_test "nvmf_digest" "$rootdir/test/nvmf/host/digest.sh" "${TEST_ARGS[@]}" 00:22:30.939 ... 00:22:30.939 00:22:30.939 ========== Backtrace end ========== 00:22:30.939 13:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1194 -- # return 0 00:22:30.939 00:22:30.939 real 0m37.133s 00:22:30.939 user 0m33.032s 00:22:30.939 sys 0m3.689s 00:22:30.939 13:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1 -- # exit 1 00:22:30.939 13:27:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # trap - ERR 00:22:30.939 13:27:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # print_backtrace 00:22:30.939 13:27:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:22:30.939 13:27:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh' 'nvmf_host' '--transport=tcp') 00:22:30.939 13:27:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1155 -- # local args 00:22:30.939 13:27:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1157 -- # xtrace_disable 00:22:30.939 13:27:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.939 ========== Backtrace start: ========== 00:22:30.939 00:22:30.939 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_host"],["/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh"],["--transport=tcp"]) 00:22:30.939 ... 
00:22:30.939 1120 timing_enter $test_name 00:22:30.939 1121 echo "************************************" 00:22:30.939 1122 echo "START TEST $test_name" 00:22:30.939 1123 echo "************************************" 00:22:30.939 1124 xtrace_restore 00:22:30.939 1125 time "$@" 00:22:30.939 1126 xtrace_disable 00:22:30.939 1127 echo "************************************" 00:22:30.939 1128 echo "END TEST $test_name" 00:22:30.939 1129 echo "************************************" 00:22:30.939 1130 timing_exit $test_name 00:22:30.939 ... 00:22:30.939 in /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh:12 -> main(["--transport=tcp"]) 00:22:30.939 ... 00:22:30.939 7 rootdir=$(readlink -f $testdir/../..) 00:22:30.939 8 source $rootdir/test/common/autotest_common.sh 00:22:30.939 9 00:22:30.939 10 run_test "nvmf_target_core" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:22:30.939 11 run_test "nvmf_target_extra" $rootdir/test/nvmf/nvmf_target_extra.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:22:30.939 => 12 run_test "nvmf_host" $rootdir/test/nvmf/nvmf_host.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:22:30.939 13 00:22:30.939 14 # Interrupt mode for now is supported only on the target, with the TCP transport and posix or ssl socket implementations. 00:22:30.939 15 if [[ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" && $SPDK_TEST_URING -eq 0 ]]; then 00:22:30.939 16 run_test "nvmf_target_core_interrupt_mode" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT --interrupt-mode 00:22:30.939 17 run_test "nvmf_interrupt" $rootdir/test/nvmf/target/interrupt.sh --transport=$SPDK_TEST_NVMF_TRANSPORT --interrupt-mode 00:22:30.939 ... 00:22:30.939 00:22:30.939 ========== Backtrace end ========== 00:22:30.939 13:27:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1194 -- # return 0 00:22:30.939 00:22:30.939 real 2m35.149s 00:22:30.939 user 6m55.525s 00:22:30.939 sys 0m32.774s 00:22:30.939 13:27:32 nvmf_tcp -- common/autotest_common.sh@1125 -- # trap - ERR 00:22:30.939 13:27:32 nvmf_tcp -- common/autotest_common.sh@1125 -- # print_backtrace 00:22:30.939 13:27:32 nvmf_tcp -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:22:30.939 13:27:32 nvmf_tcp -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh' 'nvmf_tcp' '/home/vagrant/spdk_repo/autorun-spdk.conf') 00:22:30.939 13:27:32 nvmf_tcp -- common/autotest_common.sh@1155 -- # local args 00:22:30.939 13:27:32 nvmf_tcp -- common/autotest_common.sh@1157 -- # xtrace_disable 00:22:30.939 13:27:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:30.939 ========== Backtrace start: ========== 00:22:30.939 00:22:30.939 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_tcp"],["/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh"],["--transport=tcp"]) 00:22:30.939 ... 00:22:30.939 1120 timing_enter $test_name 00:22:30.939 1121 echo "************************************" 00:22:30.939 1122 echo "START TEST $test_name" 00:22:30.939 1123 echo "************************************" 00:22:30.939 1124 xtrace_restore 00:22:30.939 1125 time "$@" 00:22:30.939 1126 xtrace_disable 00:22:30.939 1127 echo "************************************" 00:22:30.939 1128 echo "END TEST $test_name" 00:22:30.939 1129 echo "************************************" 00:22:30.939 1130 timing_exit $test_name 00:22:30.939 ... 00:22:30.939 in /home/vagrant/spdk_repo/spdk/autotest.sh:280 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"]) 00:22:30.939 ... 
00:22:30.939 275 # list of all tests can properly differentiate them. Please do not merge them into one line. 00:22:30.939 276 if [ "$SPDK_TEST_NVMF_TRANSPORT" = "rdma" ]; then 00:22:30.939 277 run_test "nvmf_rdma" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:22:30.939 278 run_test "spdkcli_nvmf_rdma" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:22:30.939 279 elif [ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" ]; then 00:22:30.939 => 280 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:22:30.939 281 if [[ $SPDK_TEST_URING -eq 0 ]]; then 00:22:30.939 282 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:22:30.939 283 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:22:30.939 284 fi 00:22:30.939 285 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh 00:22:30.939 ... 00:22:30.939 00:22:30.939 ========== Backtrace end ========== 00:22:30.939 13:27:32 nvmf_tcp -- common/autotest_common.sh@1194 -- # return 0 00:22:30.939 00:22:30.939 real 10m44.217s 00:22:30.939 user 25m46.061s 00:22:30.939 sys 2m33.360s 00:22:30.939 13:27:32 nvmf_tcp -- common/autotest_common.sh@1 -- # autotest_cleanup 00:22:30.939 13:27:32 nvmf_tcp -- common/autotest_common.sh@1392 -- # local autotest_es=1 00:22:30.939 13:27:32 nvmf_tcp -- common/autotest_common.sh@1393 -- # xtrace_disable 00:22:30.939 13:27:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:43.140 INFO: APP EXITING 00:22:43.140 INFO: killing all VMs 00:22:43.140 INFO: killing vhost app 00:22:43.140 INFO: EXIT DONE 00:22:43.140 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:43.140 Waiting for block devices as requested 00:22:43.140 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:43.140 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:44.073 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:44.073 Cleaning 00:22:44.073 Removing: /var/run/dpdk/spdk0/config 00:22:44.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:44.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:44.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:44.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:44.073 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:44.073 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:44.073 Removing: /var/run/dpdk/spdk1/config 00:22:44.073 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:44.073 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:44.073 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:44.073 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:44.331 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:44.331 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:44.331 Removing: /var/run/dpdk/spdk2/config 00:22:44.331 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:44.331 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:44.331 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:44.331 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:44.331 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:44.331 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:44.331 Removing: /var/run/dpdk/spdk3/config 00:22:44.331 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:44.331 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:44.331 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:44.331 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:44.332 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:44.332 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:44.332 Removing: /var/run/dpdk/spdk4/config 00:22:44.332 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:44.332 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:44.332 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:44.332 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:44.332 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:44.332 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:44.332 Removing: /dev/shm/nvmf_trace.0 00:22:44.332 Removing: /dev/shm/spdk_tgt_trace.pid56728 00:22:44.332 Removing: /var/run/dpdk/spdk0 00:22:44.332 Removing: /var/run/dpdk/spdk1 00:22:44.332 Removing: /var/run/dpdk/spdk2 00:22:44.332 Removing: /var/run/dpdk/spdk3 00:22:44.332 Removing: /var/run/dpdk/spdk4 00:22:44.332 Removing: /var/run/dpdk/spdk_pid56581 00:22:44.332 Removing: /var/run/dpdk/spdk_pid56728 00:22:44.332 Removing: /var/run/dpdk/spdk_pid56934 00:22:44.332 Removing: /var/run/dpdk/spdk_pid57015 00:22:44.332 Removing: /var/run/dpdk/spdk_pid57035 00:22:44.332 Removing: /var/run/dpdk/spdk_pid57145 00:22:44.332 Removing: /var/run/dpdk/spdk_pid57154 00:22:44.332 Removing: /var/run/dpdk/spdk_pid57289 00:22:44.332 Removing: /var/run/dpdk/spdk_pid57485 00:22:44.332 Removing: /var/run/dpdk/spdk_pid57639 00:22:44.332 Removing: /var/run/dpdk/spdk_pid57717 00:22:44.332 Removing: /var/run/dpdk/spdk_pid57788 00:22:44.332 Removing: /var/run/dpdk/spdk_pid57879 00:22:44.332 Removing: /var/run/dpdk/spdk_pid57959 00:22:44.332 Removing: /var/run/dpdk/spdk_pid57997 00:22:44.332 Removing: /var/run/dpdk/spdk_pid58033 00:22:44.332 Removing: /var/run/dpdk/spdk_pid58097 00:22:44.332 Removing: /var/run/dpdk/spdk_pid58200 00:22:44.332 Removing: /var/run/dpdk/spdk_pid58633 00:22:44.332 Removing: /var/run/dpdk/spdk_pid58685 00:22:44.332 Removing: /var/run/dpdk/spdk_pid58723 00:22:44.332 Removing: /var/run/dpdk/spdk_pid58739 00:22:44.332 Removing: /var/run/dpdk/spdk_pid58806 00:22:44.332 Removing: /var/run/dpdk/spdk_pid58815 00:22:44.332 Removing: /var/run/dpdk/spdk_pid58876 00:22:44.332 Removing: /var/run/dpdk/spdk_pid58885 00:22:44.332 Removing: /var/run/dpdk/spdk_pid58930 00:22:44.332 Removing: /var/run/dpdk/spdk_pid58941 00:22:44.332 Removing: /var/run/dpdk/spdk_pid58981 00:22:44.332 Removing: /var/run/dpdk/spdk_pid58999 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59135 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59165 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59247 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59574 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59586 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59617 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59636 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59646 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59665 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59684 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59699 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59724 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59732 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59753 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59772 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59780 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59801 00:22:44.332 Removing: 
/var/run/dpdk/spdk_pid59820 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59838 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59849 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59868 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59887 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59897 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59933 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59941 00:22:44.332 Removing: /var/run/dpdk/spdk_pid59976 00:22:44.332 Removing: /var/run/dpdk/spdk_pid60048 00:22:44.332 Removing: /var/run/dpdk/spdk_pid60071 00:22:44.332 Removing: /var/run/dpdk/spdk_pid60085 00:22:44.332 Removing: /var/run/dpdk/spdk_pid60109 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60119 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60126 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60169 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60182 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60211 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60220 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60229 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60239 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60243 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60257 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60264 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60274 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60302 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60329 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60338 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60367 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60376 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60384 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60424 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60436 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60462 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60470 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60477 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60487 00:22:44.590 Removing: /var/run/dpdk/spdk_pid60494 00:22:44.591 Removing: /var/run/dpdk/spdk_pid60502 00:22:44.591 Removing: /var/run/dpdk/spdk_pid60509 00:22:44.591 Removing: /var/run/dpdk/spdk_pid60517 00:22:44.591 Removing: /var/run/dpdk/spdk_pid60593 00:22:44.591 Removing: /var/run/dpdk/spdk_pid60641 00:22:44.591 Removing: /var/run/dpdk/spdk_pid60753 00:22:44.591 Removing: /var/run/dpdk/spdk_pid60787 00:22:44.591 Removing: /var/run/dpdk/spdk_pid60832 00:22:44.591 Removing: /var/run/dpdk/spdk_pid60842 00:22:44.591 Removing: /var/run/dpdk/spdk_pid60863 00:22:44.591 Removing: /var/run/dpdk/spdk_pid60883 00:22:44.591 Removing: /var/run/dpdk/spdk_pid60909 00:22:44.591 Removing: /var/run/dpdk/spdk_pid60930 00:22:44.591 Removing: /var/run/dpdk/spdk_pid61009 00:22:44.591 Removing: /var/run/dpdk/spdk_pid61027 00:22:44.591 Removing: /var/run/dpdk/spdk_pid61071 00:22:44.591 Removing: /var/run/dpdk/spdk_pid61149 00:22:44.591 Removing: /var/run/dpdk/spdk_pid61205 00:22:44.591 Removing: /var/run/dpdk/spdk_pid61234 00:22:44.591 Removing: /var/run/dpdk/spdk_pid61328 00:22:44.591 Removing: /var/run/dpdk/spdk_pid61371 00:22:44.591 Removing: /var/run/dpdk/spdk_pid61403 00:22:44.591 Removing: /var/run/dpdk/spdk_pid61635 00:22:44.591 Removing: /var/run/dpdk/spdk_pid61727 00:22:44.591 Removing: /var/run/dpdk/spdk_pid61760 00:22:44.591 Removing: /var/run/dpdk/spdk_pid61785 00:22:44.591 Removing: /var/run/dpdk/spdk_pid61819 00:22:44.591 Removing: /var/run/dpdk/spdk_pid61857 00:22:44.591 Removing: /var/run/dpdk/spdk_pid61892 00:22:44.591 Removing: /var/run/dpdk/spdk_pid61923 00:22:44.591 Removing: /var/run/dpdk/spdk_pid62313 00:22:44.591 Removing: /var/run/dpdk/spdk_pid62351 
00:22:44.591 Removing: /var/run/dpdk/spdk_pid62691 00:22:44.591 Removing: /var/run/dpdk/spdk_pid63149 00:22:44.591 Removing: /var/run/dpdk/spdk_pid63425 00:22:44.591 Removing: /var/run/dpdk/spdk_pid64266 00:22:44.591 Removing: /var/run/dpdk/spdk_pid65753 00:22:44.591 Removing: /var/run/dpdk/spdk_pid66532 00:22:44.591 Removing: /var/run/dpdk/spdk_pid66649 00:22:44.591 Removing: /var/run/dpdk/spdk_pid66717 00:22:44.591 Removing: /var/run/dpdk/spdk_pid67099 00:22:44.591 Removing: /var/run/dpdk/spdk_pid71122 00:22:44.591 Removing: /var/run/dpdk/spdk_pid71491 00:22:44.591 Removing: /var/run/dpdk/spdk_pid71600 00:22:44.591 Removing: /var/run/dpdk/spdk_pid71741 00:22:44.591 Removing: /var/run/dpdk/spdk_pid71768 00:22:44.591 Removing: /var/run/dpdk/spdk_pid71785 00:22:44.591 Removing: /var/run/dpdk/spdk_pid71806 00:22:44.591 Removing: /var/run/dpdk/spdk_pid71890 00:22:44.591 Removing: /var/run/dpdk/spdk_pid72019 00:22:44.591 Removing: /var/run/dpdk/spdk_pid72194 00:22:44.591 Removing: /var/run/dpdk/spdk_pid72281 00:22:44.591 Removing: /var/run/dpdk/spdk_pid72468 00:22:44.591 Removing: /var/run/dpdk/spdk_pid72557 00:22:44.591 Removing: /var/run/dpdk/spdk_pid72655 00:22:44.591 Removing: /var/run/dpdk/spdk_pid73003 00:22:44.591 Removing: /var/run/dpdk/spdk_pid73412 00:22:44.591 Removing: /var/run/dpdk/spdk_pid73413 00:22:44.591 Removing: /var/run/dpdk/spdk_pid73414 00:22:44.591 Removing: /var/run/dpdk/spdk_pid73676 00:22:44.591 Removing: /var/run/dpdk/spdk_pid74006 00:22:44.591 Removing: /var/run/dpdk/spdk_pid74008 00:22:44.591 Removing: /var/run/dpdk/spdk_pid74340 00:22:44.591 Removing: /var/run/dpdk/spdk_pid74360 00:22:44.591 Removing: /var/run/dpdk/spdk_pid74374 00:22:44.591 Removing: /var/run/dpdk/spdk_pid74401 00:22:44.849 Removing: /var/run/dpdk/spdk_pid74416 00:22:44.849 Removing: /var/run/dpdk/spdk_pid74758 00:22:44.849 Removing: /var/run/dpdk/spdk_pid74811 00:22:44.849 Removing: /var/run/dpdk/spdk_pid75130 00:22:44.849 Removing: /var/run/dpdk/spdk_pid75320 00:22:44.849 Removing: /var/run/dpdk/spdk_pid75764 00:22:44.849 Removing: /var/run/dpdk/spdk_pid76661 00:22:44.849 Removing: /var/run/dpdk/spdk_pid77294 00:22:44.849 Removing: /var/run/dpdk/spdk_pid77296 00:22:44.849 Clean 00:22:51.406 13:27:52 nvmf_tcp -- common/autotest_common.sh@1451 -- # return 1 00:22:51.406 13:27:52 nvmf_tcp -- common/autotest_common.sh@1 -- # : 00:22:51.406 13:27:52 nvmf_tcp -- common/autotest_common.sh@1 -- # exit 1 00:22:51.416 [Pipeline] } 00:22:51.434 [Pipeline] // timeout 00:22:51.441 [Pipeline] } 00:22:51.459 [Pipeline] // stage 00:22:51.466 [Pipeline] } 00:22:51.470 ERROR: script returned exit code 1 00:22:51.470 Setting overall build result to FAILURE 00:22:51.484 [Pipeline] // catchError 00:22:51.494 [Pipeline] stage 00:22:51.496 [Pipeline] { (Stop VM) 00:22:51.509 [Pipeline] sh 00:22:51.847 + vagrant halt 00:22:55.130 ==> default: Halting domain... 00:23:01.703 [Pipeline] sh 00:23:01.981 + vagrant destroy -f 00:23:06.166 ==> default: Removing domain... 
00:23:06.177 [Pipeline] sh 00:23:06.457 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:23:06.466 [Pipeline] } 00:23:06.481 [Pipeline] // stage 00:23:06.487 [Pipeline] } 00:23:06.501 [Pipeline] // dir 00:23:06.506 [Pipeline] } 00:23:06.521 [Pipeline] // wrap 00:23:06.527 [Pipeline] } 00:23:06.540 [Pipeline] // catchError 00:23:06.550 [Pipeline] stage 00:23:06.552 [Pipeline] { (Epilogue) 00:23:06.566 [Pipeline] sh 00:23:06.847 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:09.387 [Pipeline] catchError 00:23:09.389 [Pipeline] { 00:23:09.401 [Pipeline] sh 00:23:09.680 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:09.939 Artifacts sizes are good 00:23:09.948 [Pipeline] } 00:23:09.965 [Pipeline] // catchError 00:23:09.977 [Pipeline] archiveArtifacts 00:23:09.983 Archiving artifacts 00:23:10.255 [Pipeline] cleanWs 00:23:10.268 [WS-CLEANUP] Deleting project workspace... 00:23:10.268 [WS-CLEANUP] Deferred wipeout is used... 00:23:10.275 [WS-CLEANUP] done 00:23:10.276 [Pipeline] } 00:23:10.297 [Pipeline] // stage 00:23:10.303 [Pipeline] } 00:23:10.320 [Pipeline] // node 00:23:10.327 [Pipeline] End of Pipeline 00:23:10.375 Finished: FAILURE